Nov 25 06:47:04 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 25 06:47:04 crc restorecon[4463]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 06:47:04 crc restorecon[4463]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 06:47:04 crc restorecon[4463]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 06:47:04 crc restorecon[4463]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 06:47:04 crc restorecon[4463]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:04 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 
06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 06:47:05 crc 
restorecon[4463]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 
06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 06:47:05 crc restorecon[4463]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 
06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc 
restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 06:47:05 crc restorecon[4463]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 06:47:05 crc restorecon[4463]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 25 06:47:05 crc kubenswrapper[4482]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 06:47:05 crc kubenswrapper[4482]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 25 06:47:05 crc kubenswrapper[4482]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 06:47:05 crc kubenswrapper[4482]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
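The restorecon output ends just above: each "not reset as customized by admin" line is restorecon observing that a file's on-disk SELinux context differs from the policy default but carries an admin/runtime customization (here the container_file_t type with MCS categories such as s0:c7,c13 applied by the container runtime), so the label is deliberately left alone. A minimal sketch, assuming a Linux host with SELinux and Python, of reading such a label directly from the security.selinux extended attribute (the path below is just one taken from this log):

import os

def selinux_label(path: str) -> str:
    """Read a file's SELinux context from its security.selinux xattr.

    Returns a string like 'system_u:object_r:container_file_t:s0:c7,c13'.
    """
    raw = os.getxattr(path, "security.selinux")  # Linux-only call
    return raw.rstrip(b"\x00").decode()          # label is NUL-terminated

if __name__ == "__main__":
    # Example path taken from the log above; any file on an SELinux host works.
    path = "/var/lib/kubelet/config.json"
    try:
        print(path, "->", selinux_label(path))
    except OSError as exc:
        print("could not read label:", exc)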
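The kubenswrapper deprecation warnings above and in the entries that follow all point at the same remedy: set the parameter in the kubelet config file named by --config instead of on the command line. As a reference, a sketch of the flag-to-field mapping for the flags seen in this log; the field names follow the upstream KubeletConfiguration (kubelet.config.k8s.io/v1beta1) documentation and should be treated as assumptions to verify against the kubelet version actually running on this node:

# Deprecated kubelet flags from this log, mapped to the KubeletConfiguration
# fields that replace them. Verify field names against the running kubelet.
FLAG_TO_CONFIG_FIELD = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir": "volumePluginDir",
    "--register-with-taints": "registerWithTaints",
    "--system-reserved": "systemReserved",
    # No direct field: the warning itself says to use eviction settings.
    "--minimum-container-ttl-duration": "evictionHard / evictionSoft",
    # No config field: the image GC now gets the sandbox image from CRI.
    "--pod-infra-container-image": "(none; handled by the CRI runtime)",
}

for flag, field in FLAG_TO_CONFIG_FIELD.items():
    print(f"{flag:35} -> {field}")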
Nov 25 06:47:05 crc kubenswrapper[4482]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 25 06:47:05 crc kubenswrapper[4482]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.699698 4482 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703048 4482 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703065 4482 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703070 4482 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703074 4482 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703077 4482 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703083 4482 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703088 4482 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703093 4482 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703096 4482 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703100 4482 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703110 4482 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703114 4482 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703118 4482 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703123 4482 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703129 4482 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703133 4482 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703136 4482 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703140 4482 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703143 4482 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703146 4482 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703149 4482 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703152 4482 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703155 4482 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703160 4482 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703164 4482 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703183 4482 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703186 4482 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703189 4482 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703192 4482 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703195 4482 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703198 4482 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703202 4482 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703205 4482 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703211 4482 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703215 4482 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703219 4482 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703224 4482 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703227 4482 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703233 4482 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703237 4482 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703241 4482 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703245 4482 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703249 4482 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703253 4482 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703257 4482 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703260 4482 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703263 4482 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703266 4482 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703270 4482 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703273 4482 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703276 4482 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703279 4482 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703282 4482 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703286 4482 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703289 4482 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703292 4482 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703295 4482 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703298 4482 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703301 4482 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703304 4482 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703307 4482 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703311 4482 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703314 4482 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703317 4482 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703320 4482 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703323 4482 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703326 4482 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703332 4482 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703337 4482 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703340 4482 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.703343 4482 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
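[Editor's note] The "unrecognized feature gate" warnings above come from OpenShift-specific gates that the upstream kubelet's feature_gate.go does not know; the same list is re-emitted on every feature-gate parse pass during startup, which is why it repeats below. When triaging a journal like this one, it can help to collapse the noise to the distinct gate names. A minimal sketch, assuming only the `feature_gate.go:330] unrecognized feature gate: <Name>` format shown above (the `kubelet.log` path is a placeholder for a saved journal excerpt):

```python
import re
from collections import Counter

# Matches the klog warning emitted once per gate and per parsing pass, e.g.:
#   W1125 06:47:05.703048 4482 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
GATE_RE = re.compile(r"unrecognized feature gate: (\S+)")

def unrecognized_gates(log_text: str) -> Counter:
    """Count how often each unrecognized gate name is warned about."""
    return Counter(GATE_RE.findall(log_text))

if __name__ == "__main__":
    with open("kubelet.log") as f:  # placeholder path
        counts = unrecognized_gates(f.read())
    # The distinct names are what matter for triage; the count just shows
    # how many parse passes the kubelet made.
    for name, n in sorted(counts.items()):
        print(f"{name}: {n} warning(s)")
```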
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703733 4482 flags.go:64] FLAG: --address="0.0.0.0"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703745 4482 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703751 4482 flags.go:64] FLAG: --anonymous-auth="true"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703756 4482 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703760 4482 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703764 4482 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703769 4482 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703773 4482 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703777 4482 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703781 4482 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703785 4482 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703789 4482 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703792 4482 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703796 4482 flags.go:64] FLAG: --cgroup-root=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703799 4482 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703803 4482 flags.go:64] FLAG: --client-ca-file=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703806 4482 flags.go:64] FLAG: --cloud-config=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703810 4482 flags.go:64] FLAG: --cloud-provider=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703814 4482 flags.go:64] FLAG: --cluster-dns="[]"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703820 4482 flags.go:64] FLAG: --cluster-domain=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703824 4482 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703827 4482 flags.go:64] FLAG: --config-dir=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703831 4482 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703835 4482 flags.go:64] FLAG: --container-log-max-files="5"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703840 4482 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703844 4482 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703847 4482 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703851 4482 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703855 4482 flags.go:64] FLAG: --contention-profiling="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703859 4482 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703862 4482 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703866 4482 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703871 4482 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703876 4482 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703880 4482 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703883 4482 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703887 4482 flags.go:64] FLAG: --enable-load-reader="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703891 4482 flags.go:64] FLAG: --enable-server="true"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703894 4482 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703899 4482 flags.go:64] FLAG: --event-burst="100"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703902 4482 flags.go:64] FLAG: --event-qps="50"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703906 4482 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703909 4482 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703913 4482 flags.go:64] FLAG: --eviction-hard=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703918 4482 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703922 4482 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703925 4482 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703929 4482 flags.go:64] FLAG: --eviction-soft=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703933 4482 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703936 4482 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703941 4482 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703945 4482 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703949 4482 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703954 4482 flags.go:64] FLAG: --fail-swap-on="true"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703957 4482 flags.go:64] FLAG: --feature-gates=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703961 4482 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703965 4482 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703969 4482 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703972 4482 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703977 4482 flags.go:64] FLAG: --healthz-port="10248"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703981 4482 flags.go:64] FLAG: --help="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703984 4482 flags.go:64] FLAG: --hostname-override=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703988 4482 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703991 4482 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703995 4482 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.703999 4482 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704002 4482 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704006 4482 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704010 4482 flags.go:64] FLAG: --image-service-endpoint=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704014 4482 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704018 4482 flags.go:64] FLAG: --kube-api-burst="100"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704022 4482 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704026 4482 flags.go:64] FLAG: --kube-api-qps="50"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704039 4482 flags.go:64] FLAG: --kube-reserved=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704043 4482 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704048 4482 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704052 4482 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704056 4482 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704060 4482 flags.go:64] FLAG: --lock-file=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704063 4482 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704067 4482 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704071 4482 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704077 4482 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704081 4482 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704085 4482 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704089 4482 flags.go:64] FLAG: --logging-format="text"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704093 4482 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704097 4482 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704101 4482 flags.go:64] FLAG: --manifest-url=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704104 4482 flags.go:64] FLAG: --manifest-url-header=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704110 4482 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704114 4482 flags.go:64] FLAG: --max-open-files="1000000"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704119 4482 flags.go:64] FLAG: --max-pods="110"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704123 4482 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704126 4482 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704130 4482 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704134 4482 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704138 4482 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704141 4482 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704145 4482 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704154 4482 flags.go:64] FLAG: --node-status-max-images="50"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704158 4482 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704161 4482 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704179 4482 flags.go:64] FLAG: --pod-cidr=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704184 4482 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704190 4482 flags.go:64] FLAG: --pod-manifest-path=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704194 4482 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704198 4482 flags.go:64] FLAG: --pods-per-core="0"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704202 4482 flags.go:64] FLAG: --port="10250"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704206 4482 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704210 4482 flags.go:64] FLAG: --provider-id=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704214 4482 flags.go:64] FLAG: --qos-reserved=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704218 4482 flags.go:64] FLAG: --read-only-port="10255"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704222 4482 flags.go:64] FLAG: --register-node="true"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704226 4482 flags.go:64] FLAG: --register-schedulable="true"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704230 4482 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704236 4482 flags.go:64] FLAG: --registry-burst="10"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704240 4482 flags.go:64] FLAG: --registry-qps="5"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704244 4482 flags.go:64] FLAG: --reserved-cpus=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704248 4482 flags.go:64] FLAG: --reserved-memory=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704252 4482 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704256 4482 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704260 4482 flags.go:64] FLAG: --rotate-certificates="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704264 4482 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704268 4482 flags.go:64] FLAG: --runonce="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704271 4482 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704275 4482 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704279 4482 flags.go:64] FLAG: --seccomp-default="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704283 4482 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704286 4482 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704290 4482 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704294 4482 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704297 4482 flags.go:64] FLAG: --storage-driver-password="root"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704301 4482 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704305 4482 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704308 4482 flags.go:64] FLAG: --storage-driver-user="root"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704312 4482 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704315 4482 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704319 4482 flags.go:64] FLAG: --system-cgroups=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704323 4482 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704329 4482 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704333 4482 flags.go:64] FLAG: --tls-cert-file=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704336 4482 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704343 4482 flags.go:64] FLAG: --tls-min-version=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704347 4482 flags.go:64] FLAG: --tls-private-key-file=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704352 4482 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704357 4482 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704361 4482 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704365 4482 flags.go:64] FLAG: --v="2"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704371 4482 flags.go:64] FLAG: --version="false"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704376 4482 flags.go:64] FLAG: --vmodule=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704380 4482 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704384 4482 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
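[Editor's note] The `flags.go:64] FLAG:` dump above records every effective command-line value, including the deprecated `--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"` that the warning at the top of this startup says should move into the file named by `--config`. A sketch, assuming only the log format shown here, that recovers the flag map and prints the equivalent `systemReserved` stanza (`systemReserved` is the documented KubeletConfiguration counterpart of the flag):

```python
import re

# Matches one reconstructed journal line from the dump above, e.g.
#   ... flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
FLAG_RE = re.compile(r'flags\.go:64\] FLAG: (--[\w.-]+)="([^"]*)"')

def parse_flag_dump(lines):
    """Collect the kubelet's effective command-line values into a dict."""
    flags = {}
    for line in lines:
        m = FLAG_RE.search(line)
        if m:
            flags[m.group(1)] = m.group(2)
    return flags

def system_reserved_stanza(flags):
    """Render --system-reserved as the systemReserved map the config file expects."""
    raw = flags.get("--system-reserved", "")
    pairs = dict(item.split("=", 1) for item in raw.split(",") if "=" in item)
    return "systemReserved:\n" + "\n".join(
        f"  {key}: {value}" for key, value in sorted(pairs.items())
    )

demo = ['... flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"']
print(system_reserved_stanza(parse_flag_dump(demo)))
# systemReserved:
#   cpu: 200m
#   ephemeral-storage: 350Mi
#   memory: 350Mi
```

Values printed here are flag-level inputs only; fields set in /etc/kubernetes/kubelet.conf (the `--config` file) are not shown in this dump.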
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704474 4482 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704479 4482 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704484 4482 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704487 4482 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704491 4482 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704494 4482 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704498 4482 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704501 4482 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704505 4482 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704509 4482 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704513 4482 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704516 4482 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704519 4482 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704522 4482 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704526 4482 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704529 4482 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704534 4482 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704538 4482 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704542 4482 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704546 4482 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704550 4482 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704553 4482 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704556 4482 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704564 4482 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704568 4482 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704571 4482 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704575 4482 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704580 4482 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704583 4482 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704587 4482 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704591 4482 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704594 4482 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704598 4482 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704601 4482 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704604 4482 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704607 4482 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704610 4482 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704613 4482 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704617 4482 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704620 4482 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704623 4482 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704626 4482 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704629 4482 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704633 4482 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704636 4482 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704639 4482 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704642 4482 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704646 4482 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704649 4482 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704652 4482 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704655 4482 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704658 4482 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704661 4482 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704665 4482 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704668 4482 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704672 4482 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704676 4482 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704679 4482 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704684 4482 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704688 4482 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704691 4482 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704695 4482 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704698 4482 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704701 4482 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704704 4482 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704708 4482 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704711 4482 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704714 4482 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704717 4482 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704720 4482 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.704726 4482 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.704736 4482 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
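[Editor's note] After each parse pass the kubelet logs the effective map in Go's `map[...]` syntax (the `feature gates: {map[...]}` line above, repeated identically later in this startup). A sketch that turns that line into a Python dict of booleans, assuming only the format shown:

```python
import re

def parse_feature_gates(line: str) -> dict:
    """Turn klog's 'feature gates: {map[Name:bool ...]}' line into a dict."""
    m = re.search(r"feature gates: \{map\[(.*?)\]\}", line)
    if not m:
        return {}
    return {
        name: value == "true"
        for name, value in (pair.split(":", 1) for pair in m.group(1).split())
    }

# Abbreviated copy of the line logged above.
line = ("feature gates: {map[CloudDualStackNodeIPs:true "
        "DisableKubeletCloudCredentialProviders:true KMSv1:true "
        "NodeSwap:false ValidatingAdmissionPolicy:true]}")
gates = parse_feature_gates(line)
assert gates["KMSv1"] and not gates["NodeSwap"]
```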
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.710605 4482 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.710636 4482 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710706 4482 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710715 4482 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710720 4482 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710723 4482 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710727 4482 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710731 4482 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710736 4482 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710740 4482 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710743 4482 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710746 4482 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710749 4482 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710753 4482 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710756 4482 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710759 4482 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710763 4482 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710767 4482 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710770 4482 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710773 4482 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710776 4482 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710779 4482 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710782 4482 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710785 4482 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710789 4482 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710792 4482 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710795 4482 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710798 4482 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710801 4482 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710804 4482 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710807 4482 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710811 4482 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710814 4482 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710817 4482 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710820 4482 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710823 4482 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710827 4482 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710832 4482 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710836 4482 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710840 4482 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710846 4482 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710850 4482 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710854 4482 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710857 4482 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710860 4482 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710864 4482 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710867 4482 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710870 4482 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710873 4482 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710876 4482 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710880 4482 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710883 4482 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710887 4482 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710891 4482 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710896 4482 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710900 4482 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710904 4482 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710908 4482 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710911 4482 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710914 4482 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710921 4482 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710925 4482 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710929 4482 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710932 4482 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710936 4482 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710940 4482 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710945 4482 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710949 4482 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710953 4482 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710957 4482 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710960 4482 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710964 4482 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.710967 4482 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.710974 4482 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711087 4482 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711093 4482 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711097 4482 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711101 4482 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711104 4482 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711107 4482 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711110 4482 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711115 4482 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711119 4482 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711123 4482 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711126 4482 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711130 4482 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711134 4482 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711138 4482 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711142 4482 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711146 4482 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711149 4482 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711153 4482 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711157 4482 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711160 4482 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711163 4482 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711179 4482 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711183 4482 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711186 4482 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711189 4482 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711193 4482 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711196 4482 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711199 4482 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711202 4482 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711205 4482 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711208 4482 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711212 4482 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711215 4482 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711218 4482 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711221 4482 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711224 4482 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711227 4482 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711231 4482 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711236 4482 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711240 4482 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711244 4482 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711247 4482 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711251 4482 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711256 4482 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711260 4482 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711264 4482 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711267 4482 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711271 4482 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711274 4482 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711278 4482 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711283 4482 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711287 4482 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711291 4482 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711294 4482 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711298 4482 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711301 4482 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711304 4482 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711308 4482 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711311 4482 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711314 4482 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711317 4482 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711320 4482 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711323 4482 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711326 4482 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711329 4482 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711333 4482 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711336 4482 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711339 4482 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711342 4482 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711345 4482 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.711349 4482 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.711354 4482 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.711495 4482 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.714239 4482 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.714534 4482 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.715391 4482 server.go:997] "Starting client certificate rotation"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.715417 4482 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.715612 4482 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-19 04:55:43.066833104 +0000 UTC
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.715690 4482 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.730907 4482 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 25 06:47:05 crc kubenswrapper[4482]: E1125 06:47:05.731780 4482 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.26.133:6443: connect: connection refused" logger="UnhandledError"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.732354 4482 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/94b752e0a51c0134b00ddef6dc7a933a9d7c1d9bdc88a18dae4192a0d557d623/merged major:0 minor:43 fsType:overlay blockSize:0}] Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.780197 4482 manager.go:217] Machine: {Timestamp:2025-11-25 06:47:05.77882643 +0000 UTC m=+0.267057709 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2445406 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:dc9d32b7-fef4-46db-bcb5-f2930afc514b BootID:1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:49 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:50 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:65536000 Type:vfs Inodes:3076108 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:cb:4f:71 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:enp3s0 MacAddress:fa:16:3e:cb:4f:71 Speed:-1 Mtu:1500} {Name:enp7s0 MacAddress:fa:16:3e:24:37:d3 Speed:-1 Mtu:1440} {Name:enp7s0.20 MacAddress:52:54:00:cf:82:21 Speed:-1 Mtu:1436} {Name:enp7s0.21 MacAddress:52:54:00:25:c3:a6 Speed:-1 Mtu:1436} {Name:enp7s0.22 MacAddress:52:54:00:81:e8:a8 Speed:-1 Mtu:1436} {Name:eth10 MacAddress:46:5e:98:76:f5:3e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:02:81:d9:4f:6d:fd Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:65536 Type:Data Level:1} {Id:0 Size:65536 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:65536 Type:Data Level:1} {Id:1 Size:65536 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:65536 Type:Data Level:1} {Id:2 Size:65536 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:65536 Type:Data Level:1} {Id:3 Size:65536 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 
Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:65536 Type:Data Level:1} {Id:4 Size:65536 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:65536 Type:Data Level:1} {Id:5 Size:65536 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:65536 Type:Data Level:1} {Id:6 Size:65536 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:65536 Type:Data Level:1} {Id:7 Size:65536 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.780364 4482 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.780463 4482 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.781459 4482 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.781674 4482 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.781753 4482 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.781994 4482 topology_manager.go:138] "Creating topology manager with none policy" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.782043 4482 container_manager_linux.go:303] "Creating device plugin manager" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.782399 4482 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.782472 4482 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.782595 4482 state_mem.go:36] "Initialized new in-memory state store" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.782711 4482 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.784466 4482 kubelet.go:418] "Attempting to sync node with API server" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.784533 4482 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.784588 4482 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.784636 4482 kubelet.go:324] "Adding apiserver pod source" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.784683 4482 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.786654 4482 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.787251 4482 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.787260 4482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.26.133:6443: connect: connection refused
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.787259 4482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.26.133:6443: connect: connection refused
Nov 25 06:47:05 crc kubenswrapper[4482]: E1125 06:47:05.787318 4482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.26.133:6443: connect: connection refused" logger="UnhandledError"
Nov 25 06:47:05 crc kubenswrapper[4482]: E1125 06:47:05.787337 4482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.26.133:6443: connect: connection refused" logger="UnhandledError"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.788646 4482 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.789526 4482 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.789545 4482 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.789554 4482 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.789560 4482 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.789574 4482 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.789580 4482 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.789586 4482 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.789596 4482 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.789603 4482 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.789610 4482 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.789619 4482 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.789626 4482 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.790390 4482 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.790788 4482 server.go:1280] "Started kubelet"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.790983 4482 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.26.133:6443: connect: connection refused
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.791356 4482 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.791651 4482 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.791691 4482 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 25 06:47:05 crc systemd[1]: Started Kubernetes Kubelet.
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.792792 4482 server.go:460] "Adding debug handlers to kubelet server"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.792841 4482 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.792861 4482 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 25 06:47:05 crc kubenswrapper[4482]: E1125 06:47:05.793115 4482 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.793489 4482 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.793506 4482 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.793596 4482 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.793818 4482 factory.go:55] Registering systemd factory
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.793836 4482 factory.go:221] Registration of the systemd container factory successfully
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.793976 4482 factory.go:153] Registering CRI-O factory
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.793989 4482 factory.go:221] Registration of the crio container factory successfully
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.794043 4482 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.794061 4482 factory.go:103] Registering Raw factory
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.794074 4482 manager.go:1196] Started watching for new ooms in manager
Nov 25 06:47:05 crc kubenswrapper[4482]: E1125 06:47:05.794426 4482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused" interval="200ms"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.794504 4482 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:43:41.417411781 +0000 UTC
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.794583 4482 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 388h56m35.622836406s for next certificate rotation
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.794701 4482 manager.go:319] Starting recovery of all containers
Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.794710 4482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.26.133:6443: connect: connection refused
Nov 25 06:47:05 crc kubenswrapper[4482]: E1125 06:47:05.794756 4482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.26.133:6443: connect: connection refused" logger="UnhandledError"
Nov 25 06:47:05 crc kubenswrapper[4482]: E1125 06:47:05.801455 4482 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.26.133:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b2d100023fb9c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 06:47:05.79076598 +0000 UTC m=+0.278997238,LastTimestamp:2025-11-25 06:47:05.79076598 +0000 UTC m=+0.278997238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807079 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807744 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807774 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807787 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807796 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807806 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807815 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807824 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807869 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807880 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807890 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807900 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807922 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807935 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807944 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807953 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807964 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807973 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807981 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.807990 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808010 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808018 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808027 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808047 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808056 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808065 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808077 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808087 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808096 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808109 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808118 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808127 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808136 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808157 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808166 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808191 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808200 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808209 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808217 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808231 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808243 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808253 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808276 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808286 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808295 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808303 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808311 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808320 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808330 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808339 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808348 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808357 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808368 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808378 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808388 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808396 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808406 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808414 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808423 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808432 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808441 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808449 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808459 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808468 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808476 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808484 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808492 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808502 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808510 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808518 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808532 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808541 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808551 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808560 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808569 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808579 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808588 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808596 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808606 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808615 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808627 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808636 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808646 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808656 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808665 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808675 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808683 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808692 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808702 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808711 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808719 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808729 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808738 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808746 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808784 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808795 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808804 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808813 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808820 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808830 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808838 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808847 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808855 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808864 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808876 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808901 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808911 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808921 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.808930 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809820 4482 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809842 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809854 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809864 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809874 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809884 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809893 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809902 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809910 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809918 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809928 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809936 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809945 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809953 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809961 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809979 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809988 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.809997 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810004 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810013 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810021 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810029 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810039 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810047 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810056 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810063 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810071 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810079 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810086 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810094 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810113 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810123 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810131 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810139 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810158 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810185 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810193 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810201 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810210 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810219 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810227 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810236 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810249 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810257 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810266 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810274 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810282 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810290 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810300 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810309 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810318 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810327 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810338 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810349 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810360 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810369 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810378 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810389 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810397 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810408 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810416 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810425 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810433 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810442 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810449 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810459 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810469 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810477 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810487 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810494 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810504 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810512 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810520 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810530 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810541 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810549 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810558 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810566 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810574 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810583 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810592 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810601 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810609 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810617 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810625 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810633 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810643 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810653 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810661 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810670 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810680 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810689 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810698 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810708 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810716 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810724 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810732 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810740 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810748 4482 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810756 4482 reconstruct.go:97] "Volume reconstruction finished" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.810764 4482 reconciler.go:26] "Reconciler: start to sync state" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.824911 4482 manager.go:324] Recovery completed Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.828219 4482 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.829511 4482 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.829557 4482 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.829583 4482 kubelet.go:2335] "Starting kubelet main sync loop" Nov 25 06:47:05 crc kubenswrapper[4482]: E1125 06:47:05.829737 4482 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 25 06:47:05 crc kubenswrapper[4482]: W1125 06:47:05.830636 4482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.26.133:6443: connect: connection refused Nov 25 06:47:05 crc kubenswrapper[4482]: E1125 06:47:05.830775 4482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.26.133:6443: connect: connection refused" logger="UnhandledError" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.832519 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.834321 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.834357 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.834367 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.834992 4482 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.835012 4482 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.835030 4482 state_mem.go:36] "Initialized new in-memory state store" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.840802 4482 policy_none.go:49] "None policy: Start" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.841366 4482 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.841392 4482 state_mem.go:35] "Initializing new in-memory state store" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.874066 4482 manager.go:334] "Starting Device Plugin manager" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.874107 4482 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.874119 4482 server.go:79] "Starting device plugin registration server" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.874435 4482 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.874451 4482 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.875039 4482 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" 
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.875108 4482 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.875119 4482 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 25 06:47:05 crc kubenswrapper[4482]: E1125 06:47:05.881840 4482 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.930852 4482 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.930946 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.931767 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.931795 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.931804 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.931905 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.932094 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.932125 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.932436 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.932459 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.932486 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.932592 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.932666 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.932686 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.932694 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.932791 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.932846 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.933394 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.933424 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.933435 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.933532 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.933551 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.933534 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.933595 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.933617 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.933560 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.934104 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.934124 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.934131 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.934656 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.934674 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.934681 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.934748 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.934884 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.934914 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.935252 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.935267 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.935274 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.935372 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.935396 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.935591 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.935608 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.935615 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.935875 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.935893 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.935901 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.975569 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.976024 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.976046 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.976054 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:05 crc kubenswrapper[4482]: I1125 06:47:05.976067 4482 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 25 06:47:05 crc kubenswrapper[4482]: E1125 06:47:05.976395 4482 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.26.133:6443: connect: connection refused" node="crc"
Nov 25 06:47:05 crc kubenswrapper[4482]: E1125 06:47:05.995055 4482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused" interval="400ms"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014303 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014358 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014383 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014397 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014429 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014445 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014462 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014476 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014503 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014524 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014539 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014583 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014603 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014616 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.014630 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115422 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115459 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115572 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115589 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115479 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115636 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115650 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115673 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115683 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115680 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115718 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115748 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115725 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115699 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115809 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115828 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115844 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115859 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115877 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115896 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115914 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115922 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115932 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115958 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.115980 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.116018 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.116046 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.116066 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.116087 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.116110 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.177477 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.178527 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.178570 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.178581 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.178619 4482 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 25 06:47:06 crc kubenswrapper[4482]: E1125 06:47:06.179010 4482 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.26.133:6443: connect: connection refused"
node="crc" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.260322 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.279262 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 06:47:06 crc kubenswrapper[4482]: W1125 06:47:06.280351 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f171c2c86303e17a37554abe5352b3a2230ef872ded88bfdbc8f63fb43374d08 WatchSource:0}: Error finding container f171c2c86303e17a37554abe5352b3a2230ef872ded88bfdbc8f63fb43374d08: Status 404 returned error can't find the container with id f171c2c86303e17a37554abe5352b3a2230ef872ded88bfdbc8f63fb43374d08 Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.295381 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 06:47:06 crc kubenswrapper[4482]: W1125 06:47:06.295711 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-cc10b58bc63204d96ba891bee62c8d0f0b8f5dfdc07333ac6a2ddaad03bfcc38 WatchSource:0}: Error finding container cc10b58bc63204d96ba891bee62c8d0f0b8f5dfdc07333ac6a2ddaad03bfcc38: Status 404 returned error can't find the container with id cc10b58bc63204d96ba891bee62c8d0f0b8f5dfdc07333ac6a2ddaad03bfcc38 Nov 25 06:47:06 crc kubenswrapper[4482]: W1125 06:47:06.302829 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e7d38571b8a570abdaa5b5da391d935171afe5f34c4d08062a4bc344c94c6219 WatchSource:0}: Error finding container e7d38571b8a570abdaa5b5da391d935171afe5f34c4d08062a4bc344c94c6219: Status 404 returned error can't find the container with id e7d38571b8a570abdaa5b5da391d935171afe5f34c4d08062a4bc344c94c6219 Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.307097 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.311688 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 06:47:06 crc kubenswrapper[4482]: W1125 06:47:06.322226 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-cd17091cb531ac53a3d3b29358a2ef76e72a1d04e4adb8c3deb9fb66d55dfd08 WatchSource:0}: Error finding container cd17091cb531ac53a3d3b29358a2ef76e72a1d04e4adb8c3deb9fb66d55dfd08: Status 404 returned error can't find the container with id cd17091cb531ac53a3d3b29358a2ef76e72a1d04e4adb8c3deb9fb66d55dfd08 Nov 25 06:47:06 crc kubenswrapper[4482]: W1125 06:47:06.323940 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-cdfdfe7a85b1ae50d6677860831bec6d750904633072d9b2472e3018ea9c4048 WatchSource:0}: Error finding container cdfdfe7a85b1ae50d6677860831bec6d750904633072d9b2472e3018ea9c4048: Status 404 returned error can't find the container with id cdfdfe7a85b1ae50d6677860831bec6d750904633072d9b2472e3018ea9c4048 Nov 25 06:47:06 crc kubenswrapper[4482]: E1125 06:47:06.395779 4482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused" interval="800ms" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.579836 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.583117 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.583200 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.583220 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.583259 4482 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 06:47:06 crc kubenswrapper[4482]: E1125 06:47:06.583824 4482 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.26.133:6443: connect: connection refused" node="crc" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.792378 4482 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.26.133:6443: connect: connection refused Nov 25 06:47:06 crc kubenswrapper[4482]: W1125 06:47:06.813897 4482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.26.133:6443: connect: connection refused Nov 25 06:47:06 crc kubenswrapper[4482]: E1125 06:47:06.814411 4482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial 
tcp 192.168.26.133:6443: connect: connection refused" logger="UnhandledError" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.835080 4482 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28" exitCode=0 Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.835139 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28"} Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.835240 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cd17091cb531ac53a3d3b29358a2ef76e72a1d04e4adb8c3deb9fb66d55dfd08"} Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.835346 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.836259 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.836307 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.836316 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.837074 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd"} Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.837127 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cdfdfe7a85b1ae50d6677860831bec6d750904633072d9b2472e3018ea9c4048"} Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.838312 4482 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a" exitCode=0 Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.838363 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a"} Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.838380 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7d38571b8a570abdaa5b5da391d935171afe5f34c4d08062a4bc344c94c6219"} Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.838438 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.839042 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 
06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.839067 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.839078 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.839918 4482 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8553cf63d28a7716a6e99bb815f823963e3c270a832cab11f708e49df7fe603b" exitCode=0 Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.839962 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8553cf63d28a7716a6e99bb815f823963e3c270a832cab11f708e49df7fe603b"} Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.839977 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cc10b58bc63204d96ba891bee62c8d0f0b8f5dfdc07333ac6a2ddaad03bfcc38"} Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.840043 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.840493 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.841037 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.841061 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.841071 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.841617 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.841647 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.841657 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.842391 4482 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ac50938bda83c23f2391068a14a8c5f84554f1181814baf540b75713d7aa7493" exitCode=0 Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.842407 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ac50938bda83c23f2391068a14a8c5f84554f1181814baf540b75713d7aa7493"} Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.842423 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f171c2c86303e17a37554abe5352b3a2230ef872ded88bfdbc8f63fb43374d08"} Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.842469 4482 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.843031 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.843068 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:06 crc kubenswrapper[4482]: I1125 06:47:06.843077 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:06 crc kubenswrapper[4482]: E1125 06:47:06.921276 4482 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.26.133:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b2d100023fb9c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 06:47:05.79076598 +0000 UTC m=+0.278997238,LastTimestamp:2025-11-25 06:47:05.79076598 +0000 UTC m=+0.278997238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 06:47:06 crc kubenswrapper[4482]: W1125 06:47:06.970416 4482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.26.133:6443: connect: connection refused Nov 25 06:47:06 crc kubenswrapper[4482]: E1125 06:47:06.970524 4482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.26.133:6443: connect: connection refused" logger="UnhandledError" Nov 25 06:47:07 crc kubenswrapper[4482]: W1125 06:47:07.137750 4482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.26.133:6443: connect: connection refused Nov 25 06:47:07 crc kubenswrapper[4482]: E1125 06:47:07.137852 4482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.26.133:6443: connect: connection refused" logger="UnhandledError" Nov 25 06:47:07 crc kubenswrapper[4482]: E1125 06:47:07.196109 4482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused" interval="1.6s" Nov 25 06:47:07 crc kubenswrapper[4482]: W1125 06:47:07.274201 4482 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.26.133:6443: connect: connection refused Nov 
25 06:47:07 crc kubenswrapper[4482]: E1125 06:47:07.274284 4482 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.26.133:6443: connect: connection refused" logger="UnhandledError" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.384231 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.385245 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.386449 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.386593 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.386629 4482 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 06:47:07 crc kubenswrapper[4482]: E1125 06:47:07.387003 4482 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.26.133:6443: connect: connection refused" node="crc" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.758277 4482 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 25 06:47:07 crc kubenswrapper[4482]: E1125 06:47:07.759083 4482 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.26.133:6443: connect: connection refused" logger="UnhandledError" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.846479 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"afabe0c26cf96847b662a1236a8d5f22205769282690735780ef24580c394cb5"} Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.846575 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.847336 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.847362 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.847371 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.849328 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e7c736aa6a7231244785b8651eda784a6aa13f745d1e95a7d4963458ebe6647d"} Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.849378 4482 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"645d4b2d1e65d0d5b0e29914ac6e7ac26a91d65ad5ea42a309e983cf633e9fb2"} Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.849391 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f3d5f730d9fc2cf67bca05c6b7ca8035f813d91a8ac6b069f70457b5a63e9d9e"} Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.849481 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.850076 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.850104 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.850115 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.851787 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9"} Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.851828 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9"} Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.851843 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5"} Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.851846 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.852513 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.852559 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.852569 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.854908 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f"} Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.854934 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59"} Nov 25 06:47:07 crc 
kubenswrapper[4482]: I1125 06:47:07.854945 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560"} Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.854953 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b"} Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.854962 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8"} Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.855039 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.855558 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.855587 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.855599 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.856908 4482 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="53f69a24f1c1cabfe32d3ee36250ff2af116c1aebe35d1f9883454cbaa66918f" exitCode=0 Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.856934 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"53f69a24f1c1cabfe32d3ee36250ff2af116c1aebe35d1f9883454cbaa66918f"} Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.857046 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.857641 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.857670 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:07 crc kubenswrapper[4482]: I1125 06:47:07.857681 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.338749 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.510815 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.861995 4482 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d5aad9b71aaec08ec8ad8b9b321d52be182b58f5f8de85c1c6b87857f2d7af0c" exitCode=0 Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.862108 4482 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.862155 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.862193 4482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.862240 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.862420 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d5aad9b71aaec08ec8ad8b9b321d52be182b58f5f8de85c1c6b87857f2d7af0c"} Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.862685 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.863303 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.863316 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.863331 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.863342 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.863342 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.863450 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.863494 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.863510 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.863518 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.863757 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.863791 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.863803 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.987605 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.988498 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.988527 4482 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.988536 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:08 crc kubenswrapper[4482]: I1125 06:47:08.988556 4482 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 06:47:09 crc kubenswrapper[4482]: I1125 06:47:09.868698 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"44dc5064047c99e4e68086e62e10665a650905f8f6e5ef6e6c829802ecd2ebfb"} Nov 25 06:47:09 crc kubenswrapper[4482]: I1125 06:47:09.868769 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a048ed10ebdb87ca57b7db08bf15bf22a6f89bb2e4a9a0c65862cb949aaf12c8"} Nov 25 06:47:09 crc kubenswrapper[4482]: I1125 06:47:09.868784 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cba1700c0555a48399a3600c1af86b8b583eff231a7a821d1b56415ed921c44b"} Nov 25 06:47:09 crc kubenswrapper[4482]: I1125 06:47:09.868793 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8709f43d6d41a907d6ea4c08be2005972df9da67d65eedab232c0d86997e7f6c"} Nov 25 06:47:09 crc kubenswrapper[4482]: I1125 06:47:09.868803 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e73801accce2339ba7e2ce18619fed860176d1385fda2ee9faccdb5bb1d1b7df"} Nov 25 06:47:09 crc kubenswrapper[4482]: I1125 06:47:09.868966 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:09 crc kubenswrapper[4482]: I1125 06:47:09.869805 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:09 crc kubenswrapper[4482]: I1125 06:47:09.869844 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:09 crc kubenswrapper[4482]: I1125 06:47:09.869856 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:10 crc kubenswrapper[4482]: I1125 06:47:10.200439 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 25 06:47:10 crc kubenswrapper[4482]: I1125 06:47:10.874247 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:10 crc kubenswrapper[4482]: I1125 06:47:10.875129 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:10 crc kubenswrapper[4482]: I1125 06:47:10.875164 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:10 crc kubenswrapper[4482]: I1125 06:47:10.875189 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:10 crc kubenswrapper[4482]: I1125 06:47:10.923815 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 06:47:10 crc kubenswrapper[4482]: I1125 06:47:10.923960 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:10 crc kubenswrapper[4482]: I1125 06:47:10.924691 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:10 crc kubenswrapper[4482]: I1125 06:47:10.924740 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:10 crc kubenswrapper[4482]: I1125 06:47:10.924752 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:11 crc kubenswrapper[4482]: I1125 06:47:11.004208 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 06:47:11 crc kubenswrapper[4482]: I1125 06:47:11.100675 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 06:47:11 crc kubenswrapper[4482]: I1125 06:47:11.100883 4482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 06:47:11 crc kubenswrapper[4482]: I1125 06:47:11.100936 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:11 crc kubenswrapper[4482]: I1125 06:47:11.101946 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:11 crc kubenswrapper[4482]: I1125 06:47:11.101974 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:11 crc kubenswrapper[4482]: I1125 06:47:11.101982 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:11 crc kubenswrapper[4482]: I1125 06:47:11.876395 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:11 crc kubenswrapper[4482]: I1125 06:47:11.877226 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:11 crc kubenswrapper[4482]: I1125 06:47:11.877257 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:11 crc kubenswrapper[4482]: I1125 06:47:11.877268 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:11 crc kubenswrapper[4482]: I1125 06:47:11.988672 4482 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 25 06:47:12 crc kubenswrapper[4482]: I1125 06:47:12.328064 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 06:47:12 crc kubenswrapper[4482]: I1125 06:47:12.328226 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:12 crc kubenswrapper[4482]: I1125 06:47:12.329116 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:12 crc kubenswrapper[4482]: I1125 06:47:12.329164 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:12 crc kubenswrapper[4482]: I1125 
06:47:12.329189 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:12 crc kubenswrapper[4482]: I1125 06:47:12.625907 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 25 06:47:12 crc kubenswrapper[4482]: I1125 06:47:12.626055 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:12 crc kubenswrapper[4482]: I1125 06:47:12.626888 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:12 crc kubenswrapper[4482]: I1125 06:47:12.626919 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:12 crc kubenswrapper[4482]: I1125 06:47:12.626931 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:13 crc kubenswrapper[4482]: I1125 06:47:13.924855 4482 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 06:47:13 crc kubenswrapper[4482]: I1125 06:47:13.925116 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 06:47:15 crc kubenswrapper[4482]: E1125 06:47:15.881943 4482 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 06:47:15 crc kubenswrapper[4482]: I1125 06:47:15.978087 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 06:47:15 crc kubenswrapper[4482]: I1125 06:47:15.978315 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:15 crc kubenswrapper[4482]: I1125 06:47:15.979319 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:15 crc kubenswrapper[4482]: I1125 06:47:15.979366 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:15 crc kubenswrapper[4482]: I1125 06:47:15.979381 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:15 crc kubenswrapper[4482]: I1125 06:47:15.983194 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 06:47:16 crc kubenswrapper[4482]: I1125 06:47:16.622917 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 06:47:16 crc kubenswrapper[4482]: I1125 06:47:16.884896 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:16 crc kubenswrapper[4482]: I1125 06:47:16.885696 4482 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:16 crc kubenswrapper[4482]: I1125 06:47:16.885729 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:16 crc kubenswrapper[4482]: I1125 06:47:16.885739 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:16 crc kubenswrapper[4482]: I1125 06:47:16.888688 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 06:47:17 crc kubenswrapper[4482]: I1125 06:47:17.793072 4482 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 25 06:47:17 crc kubenswrapper[4482]: I1125 06:47:17.886919 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:17 crc kubenswrapper[4482]: I1125 06:47:17.888141 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:17 crc kubenswrapper[4482]: I1125 06:47:17.888218 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:17 crc kubenswrapper[4482]: I1125 06:47:17.888233 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:18 crc kubenswrapper[4482]: I1125 06:47:18.147770 4482 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 25 06:47:18 crc kubenswrapper[4482]: I1125 06:47:18.147846 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 25 06:47:18 crc kubenswrapper[4482]: I1125 06:47:18.150823 4482 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 25 06:47:18 crc kubenswrapper[4482]: I1125 06:47:18.150867 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 25 06:47:18 crc kubenswrapper[4482]: I1125 06:47:18.888769 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:18 crc kubenswrapper[4482]: I1125 06:47:18.889795 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:18 crc kubenswrapper[4482]: I1125 06:47:18.889837 4482 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:18 crc kubenswrapper[4482]: I1125 06:47:18.889848 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:20 crc kubenswrapper[4482]: I1125 06:47:20.890715 4482 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 25 06:47:20 crc kubenswrapper[4482]: I1125 06:47:20.890788 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.105298 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.105454 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.105765 4482 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.105803 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.106415 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.106431 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.106458 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.108778 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.894001 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.894697 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.894800 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.894868 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.894835 
4482 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 25 06:47:21 crc kubenswrapper[4482]: I1125 06:47:21.895213 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 25 06:47:22 crc kubenswrapper[4482]: I1125 06:47:22.329102 4482 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 25 06:47:22 crc kubenswrapper[4482]: I1125 06:47:22.329464 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 25 06:47:22 crc kubenswrapper[4482]: I1125 06:47:22.643788 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 25 06:47:22 crc kubenswrapper[4482]: I1125 06:47:22.643941 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:22 crc kubenswrapper[4482]: I1125 06:47:22.645111 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:22 crc kubenswrapper[4482]: I1125 06:47:22.645163 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:22 crc kubenswrapper[4482]: I1125 06:47:22.645211 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:22 crc kubenswrapper[4482]: I1125 06:47:22.652894 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 25 06:47:22 crc kubenswrapper[4482]: I1125 06:47:22.895336 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:22 crc kubenswrapper[4482]: I1125 06:47:22.899337 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:22 crc kubenswrapper[4482]: I1125 06:47:22.899375 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:22 crc kubenswrapper[4482]: I1125 06:47:22.899387 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.123941 4482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.125569 4482 trace.go:236] Trace[190038475]: "Reflector 
ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 06:47:08.896) (total time: 14228ms): Nov 25 06:47:23 crc kubenswrapper[4482]: Trace[190038475]: ---"Objects listed" error: 14228ms (06:47:23.125) Nov 25 06:47:23 crc kubenswrapper[4482]: Trace[190038475]: [14.228716657s] [14.228716657s] END Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.125600 4482 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.126224 4482 trace.go:236] Trace[553712194]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 06:47:09.076) (total time: 14049ms): Nov 25 06:47:23 crc kubenswrapper[4482]: Trace[553712194]: ---"Objects listed" error: 14049ms (06:47:23.126) Nov 25 06:47:23 crc kubenswrapper[4482]: Trace[553712194]: [14.049379484s] [14.049379484s] END Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.126248 4482 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.128044 4482 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.128086 4482 trace.go:236] Trace[432279173]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 06:47:10.092) (total time: 13035ms): Nov 25 06:47:23 crc kubenswrapper[4482]: Trace[432279173]: ---"Objects listed" error: 13035ms (06:47:23.128) Nov 25 06:47:23 crc kubenswrapper[4482]: Trace[432279173]: [13.035957797s] [13.035957797s] END Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.128103 4482 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.128352 4482 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.128552 4482 trace.go:236] Trace[1232706754]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 06:47:08.852) (total time: 14275ms): Nov 25 06:47:23 crc kubenswrapper[4482]: Trace[1232706754]: ---"Objects listed" error: 14275ms (06:47:23.128) Nov 25 06:47:23 crc kubenswrapper[4482]: Trace[1232706754]: [14.275658779s] [14.275658779s] END Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.128579 4482 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.134480 4482 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.161399 4482 csr.go:261] certificate signing request csr-k2w78 is approved, waiting to be issued Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.169129 4482 csr.go:257] certificate signing request csr-k2w78 is issued Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.176373 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.186708 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 06:47:23 
crc kubenswrapper[4482]: I1125 06:47:23.794192 4482 apiserver.go:52] "Watching apiserver" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.796640 4482 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.796934 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-xk9c4"] Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.797258 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.797326 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.797423 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.797464 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.797737 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.797879 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.797870 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.797929 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.798126 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xk9c4" Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.798520 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.800058 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.800260 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.800316 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.800882 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.800950 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.801044 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.801049 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.801323 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.801403 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.801413 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.801578 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.802378 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.813630 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.825336 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.833689 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.841813 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.862369 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.880156 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.894499 4482 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.898862 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.898928 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.900769 4482 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f" exitCode=255 Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.900839 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f"} Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.905887 4482 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.907761 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.907870 4482 scope.go:117] "RemoveContainer" containerID="5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.912812 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.925348 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.932965 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934233 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934265 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934283 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934300 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934316 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934331 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934346 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934361 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934400 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934417 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934445 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934460 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934475 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934491 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934506 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934520 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934550 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934548 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934568 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934588 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934601 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934598 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934614 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934616 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934715 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934737 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934759 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934781 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934800 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934816 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934834 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934851 4482 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934869 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934887 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934902 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934917 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934934 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934949 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934966 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934979 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935011 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935029 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935069 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935086 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935115 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935128 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935147 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935161 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935199 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935213 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935227 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935241 4482 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935258 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935274 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935359 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935378 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935396 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935410 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935423 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935438 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935456 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 06:47:23 crc 
kubenswrapper[4482]: I1125 06:47:23.935471 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935489 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935502 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935517 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935567 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935582 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935597 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935616 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935632 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935649 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935667 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935700 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935715 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935739 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935771 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935788 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935804 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935820 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935835 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935849 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934781 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935870 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934830 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934923 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.934978 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935059 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935104 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935120 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935220 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935285 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935356 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935425 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935575 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935587 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935668 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936020 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936028 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935733 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935842 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935847 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936204 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936211 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936260 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936372 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936394 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936418 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936458 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936533 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936550 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936710 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936736 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.936809 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.937005 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.937082 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.937257 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.937277 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.937442 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.937572 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.937992 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.938121 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.938377 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.938388 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.938627 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.938747 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.938859 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). 
InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.939105 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.938048 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:47:24.438030962 +0000 UTC m=+18.926262221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.938561 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.939482 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.939780 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.939829 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.939854 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.942287 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.942531 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.943282 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.943582 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.943653 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.935868 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.943836 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.943858 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.943877 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.943896 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.944090 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.944247 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.944392 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.944486 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.944637 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.944990 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945042 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945154 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945442 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945485 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945529 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945551 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945573 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945591 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945668 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945741 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945759 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945774 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945789 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945807 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945822 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945838 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945855 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945872 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945887 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945903 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945924 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945940 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945957 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945973 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.945989 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946003 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946011 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946052 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946074 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946091 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946109 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946126 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946141 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946158 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946212 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946232 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946249 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946269 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946284 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946299 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946315 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946330 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946346 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946360 4482 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946374 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946387 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946401 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946415 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946430 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946444 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946461 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946474 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946488 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946506 4482 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946521 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946538 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946553 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946569 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946582 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946598 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946616 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946631 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946645 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 06:47:23 crc 
kubenswrapper[4482]: I1125 06:47:23.946658 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946685 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946702 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946716 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946731 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946746 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946782 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946799 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946813 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946829 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 06:47:23 crc 
kubenswrapper[4482]: I1125 06:47:23.946843 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946873 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946887 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946902 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946917 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946931 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946946 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946961 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946976 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946992 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 
06:47:23.947240 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947260 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947277 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947292 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947307 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947322 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947337 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947353 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947369 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947383 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 06:47:23 
crc kubenswrapper[4482]: I1125 06:47:23.947398 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947414 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947429 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947446 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947462 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947477 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947492 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947507 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947522 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947538 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") 
" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947553 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947568 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947624 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947646 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947661 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947691 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947705 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947720 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947736 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947751 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947765 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947786 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947806 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947821 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947859 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947884 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947901 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf2vd\" (UniqueName: \"kubernetes.io/projected/606a3794-ab1c-469d-b489-83811b456769-kube-api-access-tf2vd\") pod \"node-resolver-xk9c4\" (UID: \"606a3794-ab1c-469d-b489-83811b456769\") " pod="openshift-dns/node-resolver-xk9c4" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947918 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947933 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947951 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947967 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947984 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948000 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948015 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948032 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/606a3794-ab1c-469d-b489-83811b456769-hosts-file\") pod \"node-resolver-xk9c4\" (UID: \"606a3794-ab1c-469d-b489-83811b456769\") " pod="openshift-dns/node-resolver-xk9c4" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948051 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948067 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948086 4482 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948104 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948122 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948214 4482 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948228 4482 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948237 4482 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948247 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948256 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948265 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948274 4482 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948283 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948292 4482 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948301 4482 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948309 4482 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948318 4482 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948326 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948335 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948344 4482 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948353 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948361 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948369 4482 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948381 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948390 4482 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948398 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948407 4482 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948384 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948416 4482 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948549 4482 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948561 4482 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948573 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948586 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948598 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948610 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948621 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948631 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 
crc kubenswrapper[4482]: I1125 06:47:23.948642 4482 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948650 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948660 4482 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.949856 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.949871 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.949882 4482 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.949892 4482 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.949950 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950002 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950012 4482 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950022 4482 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950030 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950149 4482 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 
06:47:23.950161 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950186 4482 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950233 4482 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950243 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950251 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950260 4482 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950269 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950278 4482 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950287 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950297 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950306 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950315 4482 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950324 4482 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 
06:47:23.950360 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950369 4482 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950378 4482 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950387 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950397 4482 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950407 4482 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950416 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950425 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950433 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950443 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950453 4482 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950463 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.950471 4482 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" 
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946133 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946305 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946588 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946875 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.946973 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947143 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947243 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947380 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947380 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947615 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.947530 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948053 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948088 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948091 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948279 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948433 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948609 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948663 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.948821 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.955409 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.955710 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.955958 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.956060 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.956150 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.956321 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.956413 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.956414 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.956625 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.956629 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.956648 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.956840 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.957200 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.958138 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.959119 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.960162 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.960296 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.960443 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.960643 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.960858 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.961189 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.961242 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.961612 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.961917 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.962224 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.962272 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.962401 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.959606 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.959849 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.962603 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.962865 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.962932 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.963014 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.963083 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.963110 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.963142 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.963825 4482 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.964777 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.965211 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.965472 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.965690 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.965693 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.966440 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.966562 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.966634 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.966721 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.965404 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.966930 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967017 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967051 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967122 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967122 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.965906 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967162 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967309 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967395 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967462 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967716 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967729 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967765 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967780 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967865 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.967941 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.968025 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.968069 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.968236 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.968561 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.968879 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.968953 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.969051 4482 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.969115 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:24.46909765 +0000 UTC m=+18.957328908 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.969255 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.969410 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.969516 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.969537 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.969587 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.969646 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.969687 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.969884 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.970297 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.970971 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.970982 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.971230 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.972475 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.972489 4482 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.976417 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:24.476396221 +0000 UTC m=+18.964627480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.971496 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.972857 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.972895 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.975200 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.975291 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.975496 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.975713 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.976039 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.975075 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.976679 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.976920 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.980257 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.979931 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.980963 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.981201 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.981895 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.982014 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.982102 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.982424 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.984685 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.984879 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.985367 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.985394 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.985407 4482 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.985453 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:24.485441296 +0000 UTC m=+18.973672555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.986122 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.986584 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.986707 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.985366 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.987588 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.988794 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.988816 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.988830 4482 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 25 06:47:23 crc kubenswrapper[4482]: E1125 06:47:23.988882 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:24.488864751 +0000 UTC m=+18.977096010 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.988974 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.989152 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.997576 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 25 06:47:23 crc kubenswrapper[4482]: I1125 06:47:23.998132 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.000040 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.000946 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.002022 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.005643 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.012973 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.015890 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.017415 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.021483 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.034115 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25
T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.041698 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.051273 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.051442 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf2vd\" (UniqueName: \"kubernetes.io/projected/606a3794-ab1c-469d-b489-83811b456769-kube-api-access-tf2vd\") pod \"node-resolver-xk9c4\" (UID: \"606a3794-ab1c-469d-b489-83811b456769\") " pod="openshift-dns/node-resolver-xk9c4" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.051459 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.051495 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/606a3794-ab1c-469d-b489-83811b456769-hosts-file\") pod \"node-resolver-xk9c4\" (UID: \"606a3794-ab1c-469d-b489-83811b456769\") " pod="openshift-dns/node-resolver-xk9c4" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.051400 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.051697 4482 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.051788 4482 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.051863 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.051945 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.052028 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.052107 4482 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.052159 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.051702 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/606a3794-ab1c-469d-b489-83811b456769-hosts-file\") pod \"node-resolver-xk9c4\" (UID: \"606a3794-ab1c-469d-b489-83811b456769\") " pod="openshift-dns/node-resolver-xk9c4" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.051755 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.052360 4482 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.052442 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.052495 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.052566 4482 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.052635 4482 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.052721 4482 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.052795 4482 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.052846 4482 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.052922 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.052991 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053041 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053109 4482 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053241 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053318 4482 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053391 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053440 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053511 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053584 4482 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053633 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053714 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053788 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053846 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053915 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.053988 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054037 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054103 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054163 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054252 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054302 4482 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054374 4482 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054422 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054488 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054534 4482 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054601 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054654 4482 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054740 4482 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054812 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054909 4482 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.054987 4482 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055037 4482 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055104 4482 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055151 4482 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055234 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055289 4482 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055344 4482 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055423 4482 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055483 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055530 4482 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055599 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055649 4482 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055720 4482 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055777 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055823 4482 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055873 4482 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055923 4482 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.055985 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056032 4482 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056081 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056128 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056332 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056391 4482 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056446 4482 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056494 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056543 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056593 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056643 4482 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056721 4482 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056774 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056828 4482 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056878 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056923 4482 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.056973 4482 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057022 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057068 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057117 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057194 4482 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057246 4482 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057298 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057344 4482 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057387 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057439 4482 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057488 4482 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057534 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057577 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057627 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057695 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057746 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057804 4482 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057857 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057902 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.057952 4482 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058007 4482 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058076 4482 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058125 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058207 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058257 4482 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058314 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058362 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058414 4482 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058461 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058510 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058570 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058623 4482 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058683 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058730 4482 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058779 4482 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058823 4482 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058866 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058915 4482 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node 
\"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.058959 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.059014 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.059057 4482 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.059100 4482 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.059150 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.059228 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.059287 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.059336 4482 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.063582 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf2vd\" (UniqueName: \"kubernetes.io/projected/606a3794-ab1c-469d-b489-83811b456769-kube-api-access-tf2vd\") pod \"node-resolver-xk9c4\" (UID: \"606a3794-ab1c-469d-b489-83811b456769\") " pod="openshift-dns/node-resolver-xk9c4" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.108648 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.116625 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 06:47:24 crc kubenswrapper[4482]: W1125 06:47:24.116884 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-08f66a0632a7f3c10ac87b40cf345f2175d3d4be9389813a32b56674c28670ce WatchSource:0}: Error finding container 08f66a0632a7f3c10ac87b40cf345f2175d3d4be9389813a32b56674c28670ce: Status 404 returned error can't find the container with id 08f66a0632a7f3c10ac87b40cf345f2175d3d4be9389813a32b56674c28670ce Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.125528 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xk9c4" Nov 25 06:47:24 crc kubenswrapper[4482]: W1125 06:47:24.126649 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-260e5c756f1c495abc8a45011421044aa91f758c5234b1ac5a300f68d8976485 WatchSource:0}: Error finding container 260e5c756f1c495abc8a45011421044aa91f758c5234b1ac5a300f68d8976485: Status 404 returned error can't find the container with id 260e5c756f1c495abc8a45011421044aa91f758c5234b1ac5a300f68d8976485 Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.131973 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 06:47:24 crc kubenswrapper[4482]: W1125 06:47:24.153287 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-22882e1a57fdb06bd1f3f33cce3a050e94a20e0f3e3d5c0aeb5898a7acfac33f WatchSource:0}: Error finding container 22882e1a57fdb06bd1f3f33cce3a050e94a20e0f3e3d5c0aeb5898a7acfac33f: Status 404 returned error can't find the container with id 22882e1a57fdb06bd1f3f33cce3a050e94a20e0f3e3d5c0aeb5898a7acfac33f Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.170050 4482 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-25 06:42:23 +0000 UTC, rotation deadline is 2026-09-27 17:04:56.108812063 +0000 UTC Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.170102 4482 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7354h17m31.938711945s for next certificate rotation Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.462621 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:47:24 crc kubenswrapper[4482]: E1125 06:47:24.462801 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:47:25.462770142 +0000 UTC m=+19.951001401 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.563276 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.563323 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.563343 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.563361 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:24 crc kubenswrapper[4482]: E1125 06:47:24.563441 4482 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:47:24 crc kubenswrapper[4482]: E1125 06:47:24.563462 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:47:24 crc kubenswrapper[4482]: E1125 06:47:24.563481 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:47:24 crc kubenswrapper[4482]: E1125 06:47:24.563491 4482 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:24 crc kubenswrapper[4482]: E1125 06:47:24.563503 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:47:24 crc 
kubenswrapper[4482]: E1125 06:47:24.563455 4482 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:47:24 crc kubenswrapper[4482]: E1125 06:47:24.563519 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:47:24 crc kubenswrapper[4482]: E1125 06:47:24.563532 4482 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:24 crc kubenswrapper[4482]: E1125 06:47:24.563520 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:25.563501225 +0000 UTC m=+20.051732484 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:47:24 crc kubenswrapper[4482]: E1125 06:47:24.563552 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:25.563545398 +0000 UTC m=+20.051776657 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:24 crc kubenswrapper[4482]: E1125 06:47:24.563572 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:25.563566136 +0000 UTC m=+20.051797395 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:47:24 crc kubenswrapper[4482]: E1125 06:47:24.563593 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:25.563585184 +0000 UTC m=+20.051816442 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.904062 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"22882e1a57fdb06bd1f3f33cce3a050e94a20e0f3e3d5c0aeb5898a7acfac33f"} Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.905214 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba"} Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.905245 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12"} Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.905258 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"260e5c756f1c495abc8a45011421044aa91f758c5234b1ac5a300f68d8976485"} Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.906493 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c"} Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.906520 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"08f66a0632a7f3c10ac87b40cf345f2175d3d4be9389813a32b56674c28670ce"} Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.907504 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xk9c4" event={"ID":"606a3794-ab1c-469d-b489-83811b456769","Type":"ContainerStarted","Data":"a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8"} Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.907529 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xk9c4" event={"ID":"606a3794-ab1c-469d-b489-83811b456769","Type":"ContainerStarted","Data":"898c5dbbba5fd8ebdfa05462b91b7af80fad61643072ed1eb188c72b3aa6fcd5"} Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.909258 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.910970 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705"} Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.910999 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.916371 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:24Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.926208 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:24Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.934956 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:24Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.943751 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:24Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.953028 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:24Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.965512 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25
T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:24Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.975183 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:24Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.982843 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:24Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:24 crc kubenswrapper[4482]: I1125 06:47:24.991077 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:24Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.000009 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:24Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.014189 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.022053 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.029904 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.037464 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.046583 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.054893 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.061491 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.068924 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.468914 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.469098 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:47:27.469071027 +0000 UTC m=+21.957302287 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.570378 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.570423 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.570440 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.570464 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.570577 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.570593 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.570603 4482 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.570651 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:27.570638608 +0000 UTC m=+22.058869867 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.570715 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.570726 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.570733 4482 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.570754 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:27.570748104 +0000 UTC m=+22.058979363 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.570783 4482 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.570800 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:27.570795254 +0000 UTC m=+22.059026513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.570835 4482 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.570853 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-25 06:47:27.570848794 +0000 UTC m=+22.059080054 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.716293 4482 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.716495 4482 request.go:1255] Unexpected error when reading response body: read tcp 192.168.26.133:54232->192.168.26.133:6443: use of closed network connection Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.716555 4482 event.go:368] "Unable to write event (may retry after sleeping)" err="unexpected error when reading response body. Please retry. Original error: read tcp 192.168.26.133:54232->192.168.26.133:6443: use of closed network connection" event="&Event{ObjectMeta:{iptables-alerter-4ln5h.187b2d14a39c380f openshift-network-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-operator,Name:iptables-alerter-4ln5h,UID:d75a4c96-2883-4a0b-bab2-0fab2b6c0b49,APIVersion:v1,ResourceVersion:25146,FieldPath:spec.containers{iptables-alerter},},Reason:Started,Message:Started container iptables-alerter,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 06:47:25.713201167 +0000 UTC m=+20.201432426,LastTimestamp:2025-11-25 06:47:25.713201167 +0000 UTC m=+20.201432426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.830524 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.830597 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.830546 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.830693 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.830789 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.830859 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.833549 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.834021 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.834739 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.835294 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.835795 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.836255 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.836773 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.837266 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.837808 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.838263 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.838709 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.839267 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.839690 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.840127 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.840599 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.841048 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.841516 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.841889 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.843910 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.844885 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.845375 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.846522 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.847262 4482 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.847730 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.848375 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.848997 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.850324 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.851409 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.851854 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.852738 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.853150 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.853621 4482 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.853731 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.855527 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.855699 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.855977 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.856741 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.858053 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.858657 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.859437 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.859980 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.860878 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.861327 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.862275 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.862822 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.862987 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.863693 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.864094 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.864972 4482 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.865435 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.866381 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.866803 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.867539 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.867948 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.868385 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.869235 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.869672 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.872573 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.892700 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.913865 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98"} Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.914289 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.915532 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-p4qzz"] Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.915836 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:25 crc kubenswrapper[4482]: W1125 06:47:25.920962 4482 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.920995 4482 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.921125 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.921628 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dvpcl"] Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.922093 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.923147 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.923282 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.925939 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-b5qtx"] Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.926261 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-b5qtx" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.926582 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 06:47:25 crc kubenswrapper[4482]: W1125 06:47:25.926762 4482 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 25 06:47:25 crc kubenswrapper[4482]: E1125 06:47:25.926791 4482 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.927111 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.927291 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.927359 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.927401 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.927620 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.935469 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.944606 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.961758 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.971401 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.981903 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:25 crc kubenswrapper[4482]: I1125 06:47:25.991330 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.003486 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.017247 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.029017 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.040612 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.050067 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.059345 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.068739 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075408 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-run-k8s-cni-cncf-io\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075460 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b267b2b-7642-40e7-985d-4f5d8cff541c-cni-binary-copy\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075481 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-multus-socket-dir-parent\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075498 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-var-lib-cni-multus\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075514 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1b267b2b-7642-40e7-985d-4f5d8cff541c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075536 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-system-cni-dir\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075554 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-run-netns\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075573 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-var-lib-kubelet\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075591 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-multus-conf-dir\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075718 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b267b2b-7642-40e7-985d-4f5d8cff541c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075743 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/46a7d6ef-c931-4f15-893b-c9436d6de1f5-rootfs\") pod \"machine-config-daemon-p4qzz\" (UID: \"46a7d6ef-c931-4f15-893b-c9436d6de1f5\") " pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075763 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1b267b2b-7642-40e7-985d-4f5d8cff541c-cnibin\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 
06:47:26.075781 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1b267b2b-7642-40e7-985d-4f5d8cff541c-os-release\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075797 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-hostroot\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075813 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b267b2b-7642-40e7-985d-4f5d8cff541c-system-cni-dir\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075846 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nsxt\" (UniqueName: \"kubernetes.io/projected/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-kube-api-access-2nsxt\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075860 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46a7d6ef-c931-4f15-893b-c9436d6de1f5-proxy-tls\") pod \"machine-config-daemon-p4qzz\" (UID: \"46a7d6ef-c931-4f15-893b-c9436d6de1f5\") " pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075879 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-multus-daemon-config\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075897 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntjhb\" (UniqueName: \"kubernetes.io/projected/1b267b2b-7642-40e7-985d-4f5d8cff541c-kube-api-access-ntjhb\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075914 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhnvq\" (UniqueName: \"kubernetes.io/projected/46a7d6ef-c931-4f15-893b-c9436d6de1f5-kube-api-access-vhnvq\") pod \"machine-config-daemon-p4qzz\" (UID: \"46a7d6ef-c931-4f15-893b-c9436d6de1f5\") " pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.075946 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-cnibin\") pod \"multus-b5qtx\" 
(UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.076332 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-os-release\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.076352 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-var-lib-cni-bin\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.076369 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-run-multus-certs\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.076387 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-cni-binary-copy\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.076404 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-etc-kubernetes\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.076430 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46a7d6ef-c931-4f15-893b-c9436d6de1f5-mcd-auth-proxy-config\") pod \"machine-config-daemon-p4qzz\" (UID: \"46a7d6ef-c931-4f15-893b-c9436d6de1f5\") " pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.076644 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-multus-cni-dir\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.095291 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.117751 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.134564 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.177964 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b267b2b-7642-40e7-985d-4f5d8cff541c-cni-binary-copy\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178007 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-multus-socket-dir-parent\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178024 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-var-lib-cni-multus\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178041 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1b267b2b-7642-40e7-985d-4f5d8cff541c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178066 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-system-cni-dir\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc 
kubenswrapper[4482]: I1125 06:47:26.178080 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-run-netns\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178093 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-var-lib-kubelet\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178106 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-multus-conf-dir\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178130 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b267b2b-7642-40e7-985d-4f5d8cff541c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178146 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/46a7d6ef-c931-4f15-893b-c9436d6de1f5-rootfs\") pod \"machine-config-daemon-p4qzz\" (UID: \"46a7d6ef-c931-4f15-893b-c9436d6de1f5\") " pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178160 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1b267b2b-7642-40e7-985d-4f5d8cff541c-os-release\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178193 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1b267b2b-7642-40e7-985d-4f5d8cff541c-cnibin\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178208 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-hostroot\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178225 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b267b2b-7642-40e7-985d-4f5d8cff541c-system-cni-dir\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178241 4482 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2nsxt\" (UniqueName: \"kubernetes.io/projected/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-kube-api-access-2nsxt\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178255 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46a7d6ef-c931-4f15-893b-c9436d6de1f5-proxy-tls\") pod \"machine-config-daemon-p4qzz\" (UID: \"46a7d6ef-c931-4f15-893b-c9436d6de1f5\") " pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178271 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-multus-daemon-config\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178287 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntjhb\" (UniqueName: \"kubernetes.io/projected/1b267b2b-7642-40e7-985d-4f5d8cff541c-kube-api-access-ntjhb\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178301 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhnvq\" (UniqueName: \"kubernetes.io/projected/46a7d6ef-c931-4f15-893b-c9436d6de1f5-kube-api-access-vhnvq\") pod \"machine-config-daemon-p4qzz\" (UID: \"46a7d6ef-c931-4f15-893b-c9436d6de1f5\") " pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178317 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-run-multus-certs\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178331 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-cnibin\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178346 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-os-release\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178360 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-var-lib-cni-bin\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178374 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/46a7d6ef-c931-4f15-893b-c9436d6de1f5-mcd-auth-proxy-config\") pod \"machine-config-daemon-p4qzz\" (UID: \"46a7d6ef-c931-4f15-893b-c9436d6de1f5\") " pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178388 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-cni-binary-copy\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178382 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-hostroot\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178440 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-etc-kubernetes\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178393 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-multus-conf-dir\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178477 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-multus-socket-dir-parent\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178539 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-var-lib-cni-multus\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178767 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-run-netns\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178837 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-system-cni-dir\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178881 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-cnibin\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178843 4482 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-run-multus-certs\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178924 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b267b2b-7642-40e7-985d-4f5d8cff541c-system-cni-dir\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178952 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-var-lib-kubelet\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179028 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-var-lib-cni-bin\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179060 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-os-release\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179094 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-multus-daemon-config\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179106 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b267b2b-7642-40e7-985d-4f5d8cff541c-cni-binary-copy\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179152 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1b267b2b-7642-40e7-985d-4f5d8cff541c-os-release\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.178403 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-etc-kubernetes\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179202 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1b267b2b-7642-40e7-985d-4f5d8cff541c-cnibin\") pod 
\"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179245 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-multus-cni-dir\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179268 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-run-k8s-cni-cncf-io\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179319 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1b267b2b-7642-40e7-985d-4f5d8cff541c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179350 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-multus-cni-dir\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179362 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-host-run-k8s-cni-cncf-io\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179423 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/46a7d6ef-c931-4f15-893b-c9436d6de1f5-rootfs\") pod \"machine-config-daemon-p4qzz\" (UID: \"46a7d6ef-c931-4f15-893b-c9436d6de1f5\") " pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179423 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b267b2b-7642-40e7-985d-4f5d8cff541c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179614 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46a7d6ef-c931-4f15-893b-c9436d6de1f5-mcd-auth-proxy-config\") pod \"machine-config-daemon-p4qzz\" (UID: \"46a7d6ef-c931-4f15-893b-c9436d6de1f5\") " pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.179640 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-cni-binary-copy\") pod \"multus-b5qtx\" (UID: 
\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.183503 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46a7d6ef-c931-4f15-893b-c9436d6de1f5-proxy-tls\") pod \"machine-config-daemon-p4qzz\" (UID: \"46a7d6ef-c931-4f15-893b-c9436d6de1f5\") " pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.191416 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntjhb\" (UniqueName: \"kubernetes.io/projected/1b267b2b-7642-40e7-985d-4f5d8cff541c-kube-api-access-ntjhb\") pod \"multus-additional-cni-plugins-dvpcl\" (UID: \"1b267b2b-7642-40e7-985d-4f5d8cff541c\") " pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.202634 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nsxt\" (UniqueName: \"kubernetes.io/projected/2384eec7-0cd1-4bc5-9bc7-b5bb42607c37-kube-api-access-2nsxt\") pod \"multus-b5qtx\" (UID: \"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\") " pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.208732 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhnvq\" (UniqueName: \"kubernetes.io/projected/46a7d6ef-c931-4f15-893b-c9436d6de1f5-kube-api-access-vhnvq\") pod \"machine-config-daemon-p4qzz\" (UID: \"46a7d6ef-c931-4f15-893b-c9436d6de1f5\") " pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.238047 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-b5qtx" Nov 25 06:47:26 crc kubenswrapper[4482]: W1125 06:47:26.249808 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2384eec7_0cd1_4bc5_9bc7_b5bb42607c37.slice/crio-84c65801b5ff36bc1ffa9790cc634fb000b6124bab2128b0d4a7be30c9a81ea8 WatchSource:0}: Error finding container 84c65801b5ff36bc1ffa9790cc634fb000b6124bab2128b0d4a7be30c9a81ea8: Status 404 returned error can't find the container with id 84c65801b5ff36bc1ffa9790cc634fb000b6124bab2128b0d4a7be30c9a81ea8 Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.284971 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c58dr"] Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.285780 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.288257 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.288510 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.288688 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.288858 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.288965 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.289028 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.290005 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.298130 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.308387 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.317365 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.328136 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.329075 4482 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.331000 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.331050 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.331060 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.331200 4482 kubelet_node_status.go:76] "Attempting to register node" 
node="crc" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.336561 4482 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.336750 4482 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.337579 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.337616 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.337627 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.337638 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.337649 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:26Z","lastTransitionTime":"2025-11-25T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.346612 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: E1125 06:47:26.353761 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.356336 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.356433 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.356513 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.356584 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.356563 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.356643 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:26Z","lastTransitionTime":"2025-11-25T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:26 crc kubenswrapper[4482]: E1125 06:47:26.365445 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.367510 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.368409 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.368442 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.368452 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.368466 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.368475 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:26Z","lastTransitionTime":"2025-11-25T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.379638 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: E1125 06:47:26.381071 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.381689 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovnkube-script-lib\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.381769 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmswc\" (UniqueName: \"kubernetes.io/projected/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-kube-api-access-cmswc\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.381826 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-kubelet\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.381846 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-run-ovn-kubernetes\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.381884 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovn-node-metrics-cert\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.381919 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-systemd\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.381936 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovnkube-config\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.381971 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-ovn\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.381997 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-systemd-units\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.382032 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-etc-openvswitch\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.382050 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-openvswitch\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.382070 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-cni-netd\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.382085 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.382116 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-env-overrides\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.382131 4482 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-run-netns\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.382192 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-cni-bin\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.382275 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-slash\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.382308 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-log-socket\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.382335 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-var-lib-openvswitch\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.382354 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-node-log\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.384013 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.384044 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.384053 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.384070 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.384079 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:26Z","lastTransitionTime":"2025-11-25T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.388450 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: E1125 06:47:26.392265 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"d
c9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.395042 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.395071 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.395081 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.395094 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.395104 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:26Z","lastTransitionTime":"2025-11-25T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.396642 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: E1125 06:47:26.403698 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: E1125 06:47:26.403933 4482 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.405067 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.405097 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.405106 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.405118 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.405125 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:26Z","lastTransitionTime":"2025-11-25T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.405788 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.413151 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.421776 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483667 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-cni-netd\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483714 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-run-netns\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483741 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483758 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-env-overrides\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483777 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-cni-netd\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc 
kubenswrapper[4482]: I1125 06:47:26.483798 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-slash\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483830 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-log-socket\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483838 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-slash\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483847 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-cni-bin\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483868 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-run-netns\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483872 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-var-lib-openvswitch\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483889 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-node-log\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483895 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483912 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-kubelet\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483929 4482 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovnkube-script-lib\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483942 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmswc\" (UniqueName: \"kubernetes.io/projected/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-kube-api-access-cmswc\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483957 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-run-ovn-kubernetes\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483973 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovn-node-metrics-cert\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.483992 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-systemd\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484007 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovnkube-config\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484023 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-ovn\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484062 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-systemd-units\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484077 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-etc-openvswitch\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484092 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-openvswitch\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484132 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-openvswitch\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484191 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-log-socket\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484214 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-cni-bin\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484232 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-var-lib-openvswitch\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484250 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-node-log\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484268 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-systemd\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484363 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-env-overrides\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484534 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-kubelet\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484592 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-run-ovn-kubernetes\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484620 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-systemd-units\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484642 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-etc-openvswitch\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484644 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-ovn\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484765 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovnkube-config\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.484790 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovnkube-script-lib\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.487014 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovn-node-metrics-cert\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.496510 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmswc\" (UniqueName: \"kubernetes.io/projected/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-kube-api-access-cmswc\") pod \"ovnkube-node-c58dr\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.507010 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.507038 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.507048 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.507062 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.507071 4482 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:26Z","lastTransitionTime":"2025-11-25T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.599420 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:26 crc kubenswrapper[4482]: W1125 06:47:26.607832 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee3c4ba_b1ee_4c31_8b39_8ed3d9e3945e.slice/crio-8317eb65a578765ad8e6efac8534606f8308dfb43abd7ed228d49453c4703aab WatchSource:0}: Error finding container 8317eb65a578765ad8e6efac8534606f8308dfb43abd7ed228d49453c4703aab: Status 404 returned error can't find the container with id 8317eb65a578765ad8e6efac8534606f8308dfb43abd7ed228d49453c4703aab Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.608595 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.608690 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.609280 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.609344 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.609401 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:26Z","lastTransitionTime":"2025-11-25T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.711858 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.711899 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.711911 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.711925 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.711934 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:26Z","lastTransitionTime":"2025-11-25T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.814052 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.814088 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.814097 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.814111 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.814120 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:26Z","lastTransitionTime":"2025-11-25T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.915605 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.915633 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.915642 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.915653 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.915662 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:26Z","lastTransitionTime":"2025-11-25T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.917443 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerID="49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546" exitCode=0 Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.917522 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.917572 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerStarted","Data":"8317eb65a578765ad8e6efac8534606f8308dfb43abd7ed228d49453c4703aab"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.918599 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b5qtx" event={"ID":"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37","Type":"ContainerStarted","Data":"c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.918621 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b5qtx" event={"ID":"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37","Type":"ContainerStarted","Data":"84c65801b5ff36bc1ffa9790cc634fb000b6124bab2128b0d4a7be30c9a81ea8"} Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.928961 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.938883 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.953482 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.963235 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.972844 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.982062 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:26 crc kubenswrapper[4482]: I1125 06:47:26.992237 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.000423 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.009267 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.018008 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.018038 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.018049 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.018064 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.018073 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:27Z","lastTransitionTime":"2025-11-25T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.023248 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.034894 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.044720 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.059939 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.070842 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.082529 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.091970 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.107162 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z 
is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.111739 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.111773 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.112935 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.116326 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.120587 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.122821 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.122845 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.122853 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.122867 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.122875 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:27Z","lastTransitionTime":"2025-11-25T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:27 crc kubenswrapper[4482]: W1125 06:47:27.126001 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b267b2b_7642_40e7_985d_4f5d8cff541c.slice/crio-0e2839cff8ddffd398b5a8b1ea8d452c5e53c2de5b10baa3e1e4ac47c8391b6d WatchSource:0}: Error finding container 0e2839cff8ddffd398b5a8b1ea8d452c5e53c2de5b10baa3e1e4ac47c8391b6d: Status 404 returned error can't find the container with id 0e2839cff8ddffd398b5a8b1ea8d452c5e53c2de5b10baa3e1e4ac47c8391b6d Nov 25 06:47:27 crc kubenswrapper[4482]: W1125 06:47:27.129031 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a7d6ef_c931_4f15_893b_c9436d6de1f5.slice/crio-cfb46b5a4523d1492c135941d87f5e817755413aa4631f46175c6756edd055b1 WatchSource:0}: Error finding container cfb46b5a4523d1492c135941d87f5e817755413aa4631f46175c6756edd055b1: Status 404 returned error can't find the container with id cfb46b5a4523d1492c135941d87f5e817755413aa4631f46175c6756edd055b1 Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.134501 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.151043 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.170443 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.180599 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.190077 4482 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.201571 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.210724 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.220109 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.224500 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.224528 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.224540 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.224554 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.224563 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:27Z","lastTransitionTime":"2025-11-25T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.326967 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.327003 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.327012 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.327026 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.327035 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:27Z","lastTransitionTime":"2025-11-25T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.429384 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.429437 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.429448 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.429469 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.429480 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:27Z","lastTransitionTime":"2025-11-25T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.491864 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.492054 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:47:31.492034308 +0000 UTC m=+25.980265567 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.531434 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.531479 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.531509 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.531534 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.531545 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:27Z","lastTransitionTime":"2025-11-25T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.592948 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.592990 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.593013 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.593049 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.593129 4482 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.593194 4482 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.593199 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.593261 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:31.593222052 +0000 UTC m=+26.081453302 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.593272 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.593272 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.593319 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.593332 4482 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.593280 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:31.593272737 +0000 UTC m=+26.081503997 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.593285 4482 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.593426 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:31.593408975 +0000 UTC m=+26.081640234 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.593489 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:31.593471873 +0000 UTC m=+26.081703132 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.633782 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.633830 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.633842 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.633861 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.633872 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:27Z","lastTransitionTime":"2025-11-25T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.735772 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.735806 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.735821 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.735836 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.735846 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:27Z","lastTransitionTime":"2025-11-25T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.830391 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.830936 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.831048 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.831101 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.831400 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:27 crc kubenswrapper[4482]: E1125 06:47:27.831576 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.837965 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.838015 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.838028 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.838045 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.838057 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:27Z","lastTransitionTime":"2025-11-25T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.926900 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerStarted","Data":"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.929396 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerStarted","Data":"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.929535 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerStarted","Data":"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.929600 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerStarted","Data":"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.929656 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerStarted","Data":"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.929722 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerStarted","Data":"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.929799 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a"} Nov 
25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.929876 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.929956 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"cfb46b5a4523d1492c135941d87f5e817755413aa4631f46175c6756edd055b1"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.930044 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" event={"ID":"1b267b2b-7642-40e7-985d-4f5d8cff541c","Type":"ContainerDied","Data":"22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.929742 4482 generic.go:334] "Generic (PLEG): container finished" podID="1b267b2b-7642-40e7-985d-4f5d8cff541c" containerID="22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1" exitCode=0 Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.930200 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" event={"ID":"1b267b2b-7642-40e7-985d-4f5d8cff541c","Type":"ContainerStarted","Data":"0e2839cff8ddffd398b5a8b1ea8d452c5e53c2de5b10baa3e1e4ac47c8391b6d"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.939453 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.939487 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.939499 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.939514 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.939526 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:27Z","lastTransitionTime":"2025-11-25T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.949124 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.967467 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.979767 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:27 crc kubenswrapper[4482]: I1125 06:47:27.996761 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:27Z 
is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.005581 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.016697 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.026948 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.038094 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.041241 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.041282 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.041294 4482 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.041311 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.041323 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:28Z","lastTransitionTime":"2025-11-25T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.050425 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.058613 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.068356 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.075204 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.083608 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.095008 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc 
kubenswrapper[4482]: I1125 06:47:28.105346 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.114554 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.123371 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11
-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.134481 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.142782 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.143392 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.143431 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.143441 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.143456 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:28 crc 
kubenswrapper[4482]: I1125 06:47:28.143474 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:28Z","lastTransitionTime":"2025-11-25T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.154000 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.163865 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.174621 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.183813 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.199479 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.208521 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.219267 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.246797 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.246849 4482 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.246860 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.246879 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.246890 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:28Z","lastTransitionTime":"2025-11-25T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.349259 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.349305 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.349315 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.349336 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.349352 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:28Z","lastTransitionTime":"2025-11-25T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.451185 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.451224 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.451235 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.451250 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.451258 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:28Z","lastTransitionTime":"2025-11-25T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.553547 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.553582 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.553591 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.553603 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.553611 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:28Z","lastTransitionTime":"2025-11-25T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.655434 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.655463 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.655472 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.655485 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.655492 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:28Z","lastTransitionTime":"2025-11-25T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.757870 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.757900 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.757910 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.757920 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.757930 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:28Z","lastTransitionTime":"2025-11-25T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.860154 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.860202 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.860212 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.860222 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.860231 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:28Z","lastTransitionTime":"2025-11-25T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.934716 4482 generic.go:334] "Generic (PLEG): container finished" podID="1b267b2b-7642-40e7-985d-4f5d8cff541c" containerID="8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5" exitCode=0 Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.934823 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" event={"ID":"1b267b2b-7642-40e7-985d-4f5d8cff541c","Type":"ContainerDied","Data":"8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5"} Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.945004 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.955924 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.961607 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.961640 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.961651 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.961664 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.961673 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:28Z","lastTransitionTime":"2025-11-25T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.969893 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.983324 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:28 crc kubenswrapper[4482]: I1125 06:47:28.993406 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.005053 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.017718 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z 
is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.026494 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.035675 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.044423 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.053153 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.062550 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.064833 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.064860 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.064868 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 
06:47:29.064882 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.064895 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:29Z","lastTransitionTime":"2025-11-25T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.073518 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.167227 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.167269 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.167280 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.167298 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.167307 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:29Z","lastTransitionTime":"2025-11-25T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.269110 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.269143 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.269152 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.269184 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.269193 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:29Z","lastTransitionTime":"2025-11-25T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.370967 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.370997 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.371006 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.371019 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.371030 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:29Z","lastTransitionTime":"2025-11-25T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.473287 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.473690 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.473759 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.473831 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.473899 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:29Z","lastTransitionTime":"2025-11-25T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.510047 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-m5qcx"] Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.510640 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-m5qcx" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.512816 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.512911 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.513322 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.514215 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.522259 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.532054 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.544919 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z 
is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.554214 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.565197 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.575650 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.576562 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.576611 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.576625 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.576645 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.576656 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:29Z","lastTransitionTime":"2025-11-25T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.583532 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.593060 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.601831 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.609821 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/371864cf-3771-4348-9e81-929eee585f98-serviceca\") pod \"node-ca-m5qcx\" (UID: \"371864cf-3771-4348-9e81-929eee585f98\") " pod="openshift-image-registry/node-ca-m5qcx" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.609918 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djzlr\" (UniqueName: \"kubernetes.io/projected/371864cf-3771-4348-9e81-929eee585f98-kube-api-access-djzlr\") pod \"node-ca-m5qcx\" (UID: \"371864cf-3771-4348-9e81-929eee585f98\") " pod="openshift-image-registry/node-ca-m5qcx" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.609953 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/371864cf-3771-4348-9e81-929eee585f98-host\") pod \"node-ca-m5qcx\" (UID: \"371864cf-3771-4348-9e81-929eee585f98\") " pod="openshift-image-registry/node-ca-m5qcx" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.613442 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.623150 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.631649 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.640763 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.650946 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.678862 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.678908 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.678920 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.678936 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.678948 4482 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:29Z","lastTransitionTime":"2025-11-25T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.711272 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djzlr\" (UniqueName: \"kubernetes.io/projected/371864cf-3771-4348-9e81-929eee585f98-kube-api-access-djzlr\") pod \"node-ca-m5qcx\" (UID: \"371864cf-3771-4348-9e81-929eee585f98\") " pod="openshift-image-registry/node-ca-m5qcx" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.711309 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/371864cf-3771-4348-9e81-929eee585f98-host\") pod \"node-ca-m5qcx\" (UID: \"371864cf-3771-4348-9e81-929eee585f98\") " pod="openshift-image-registry/node-ca-m5qcx" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.711346 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/371864cf-3771-4348-9e81-929eee585f98-serviceca\") pod \"node-ca-m5qcx\" (UID: \"371864cf-3771-4348-9e81-929eee585f98\") " pod="openshift-image-registry/node-ca-m5qcx" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.711593 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/371864cf-3771-4348-9e81-929eee585f98-host\") pod \"node-ca-m5qcx\" (UID: \"371864cf-3771-4348-9e81-929eee585f98\") " pod="openshift-image-registry/node-ca-m5qcx" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.712184 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/371864cf-3771-4348-9e81-929eee585f98-serviceca\") pod \"node-ca-m5qcx\" (UID: \"371864cf-3771-4348-9e81-929eee585f98\") " pod="openshift-image-registry/node-ca-m5qcx" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.728846 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djzlr\" (UniqueName: \"kubernetes.io/projected/371864cf-3771-4348-9e81-929eee585f98-kube-api-access-djzlr\") pod \"node-ca-m5qcx\" (UID: \"371864cf-3771-4348-9e81-929eee585f98\") " pod="openshift-image-registry/node-ca-m5qcx" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.781690 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.781727 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.781736 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.781753 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.781765 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:29Z","lastTransitionTime":"2025-11-25T06:47:29Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.821558 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-m5qcx" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.830733 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:29 crc kubenswrapper[4482]: E1125 06:47:29.830840 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.830976 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.831146 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:29 crc kubenswrapper[4482]: E1125 06:47:29.831225 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:29 crc kubenswrapper[4482]: E1125 06:47:29.831144 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.883505 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.883527 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.883534 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.883545 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.883553 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:29Z","lastTransitionTime":"2025-11-25T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.940820 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m5qcx" event={"ID":"371864cf-3771-4348-9e81-929eee585f98","Type":"ContainerStarted","Data":"985d4ac4c3d0a8acdb627a2bc68f2383193c24871d4b4be1630930a5e93cbb4b"} Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.944762 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerStarted","Data":"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640"} Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.947757 4482 generic.go:334] "Generic (PLEG): container finished" podID="1b267b2b-7642-40e7-985d-4f5d8cff541c" containerID="982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311" exitCode=0 Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.947800 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" event={"ID":"1b267b2b-7642-40e7-985d-4f5d8cff541c","Type":"ContainerDied","Data":"982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311"} Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.958348 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.967761 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.977671 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.985713 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.985740 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.985749 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.985762 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.985772 4482 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:29Z","lastTransitionTime":"2025-11-25T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:29 crc kubenswrapper[4482]: I1125 06:47:29.988559 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:29Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.003666 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:30Z 
is after 2025-08-24T17:21:41Z" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.015787 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:30Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.027849 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:30Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.037101 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:30Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:30 
crc kubenswrapper[4482]: I1125 06:47:30.047062 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:30Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.056453 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:30Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.067574 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:30Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.079704 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:30Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.089221 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.089249 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.089259 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.089275 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.089285 4482 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:30Z","lastTransitionTime":"2025-11-25T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.093971 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:30Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.104767 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:30Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.191593 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.191624 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.191635 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.191650 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.191661 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:30Z","lastTransitionTime":"2025-11-25T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.294284 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.294331 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.294343 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.294363 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.294374 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:30Z","lastTransitionTime":"2025-11-25T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.396414 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.396444 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.396452 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.396466 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.396476 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:30Z","lastTransitionTime":"2025-11-25T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.497955 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.497979 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.497989 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.498003 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.498014 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:30Z","lastTransitionTime":"2025-11-25T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.612878 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.612910 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.612918 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.612930 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.612937 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:30Z","lastTransitionTime":"2025-11-25T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.715852 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.715878 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.715887 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.715900 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.715908 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:30Z","lastTransitionTime":"2025-11-25T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.819793 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.819840 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.819853 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.819870 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.819884 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:30Z","lastTransitionTime":"2025-11-25T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.922420 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.922458 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.922466 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.922481 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.922491 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:30Z","lastTransitionTime":"2025-11-25T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.953821 4482 generic.go:334] "Generic (PLEG): container finished" podID="1b267b2b-7642-40e7-985d-4f5d8cff541c" containerID="7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3" exitCode=0 Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.953901 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" event={"ID":"1b267b2b-7642-40e7-985d-4f5d8cff541c","Type":"ContainerDied","Data":"7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3"} Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.955657 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m5qcx" event={"ID":"371864cf-3771-4348-9e81-929eee585f98","Type":"ContainerStarted","Data":"34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355"} Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.966713 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:30Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.976356 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:30Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:30 crc kubenswrapper[4482]: I1125 06:47:30.991808 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:30Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.001648 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.010494 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.019186 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.024371 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.024406 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.024418 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.024431 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.024441 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:31Z","lastTransitionTime":"2025-11-25T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.031517 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d7
5ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.051183 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.094435 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.109340 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.122079 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.127368 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.127404 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.127416 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.127434 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.127446 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:31Z","lastTransitionTime":"2025-11-25T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.136434 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.146777 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.157254 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.167775 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.175639 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.185812 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.199657 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.209564 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.220078 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.229221 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.229522 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.229552 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.229561 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.229578 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.229586 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:31Z","lastTransitionTime":"2025-11-25T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.239403 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.248760 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.258822 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.266296 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.275243 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.283903 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.291967 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.331893 4482 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.331930 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.331940 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.331956 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.331965 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:31Z","lastTransitionTime":"2025-11-25T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.434389 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.434432 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.434442 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.434458 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.434468 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:31Z","lastTransitionTime":"2025-11-25T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.526910 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.527076 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:47:39.527051659 +0000 UTC m=+34.015282918 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.536818 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.536851 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.536861 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.536878 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.536887 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:31Z","lastTransitionTime":"2025-11-25T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.628418 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.628448 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.628468 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.628501 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.628586 4482 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.628599 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.628614 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.628625 4482 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.628640 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:39.628627193 +0000 UTC m=+34.116858453 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.628654 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:39.628648915 +0000 UTC m=+34.116880174 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.628669 4482 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.628706 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:39.628697917 +0000 UTC m=+34.116929177 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.628742 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.628790 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.628806 4482 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.628887 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:39.628859573 +0000 UTC m=+34.117090832 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.638730 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.638771 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.638782 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.638799 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.638808 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:31Z","lastTransitionTime":"2025-11-25T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.740777 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.740998 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.741009 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.741022 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.741031 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:31Z","lastTransitionTime":"2025-11-25T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.829847 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.829937 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.829850 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.829978 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.830095 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:31 crc kubenswrapper[4482]: E1125 06:47:31.830230 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.843044 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.843086 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.843096 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.843112 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.843122 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:31Z","lastTransitionTime":"2025-11-25T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.945067 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.945100 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.945109 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.945123 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.945132 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:31Z","lastTransitionTime":"2025-11-25T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.962615 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerStarted","Data":"8d49f5daa232b8c42fe1b250cf8b1fc07740ef0caba10dd7cfb7304877e8ab41"} Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.962942 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.965796 4482 generic.go:334] "Generic (PLEG): container finished" podID="1b267b2b-7642-40e7-985d-4f5d8cff541c" containerID="add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2" exitCode=0 Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.965884 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" event={"ID":"1b267b2b-7642-40e7-985d-4f5d8cff541c","Type":"ContainerDied","Data":"add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2"} Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.986146 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.998712 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:31Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:31 crc kubenswrapper[4482]: I1125 06:47:31.999526 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.012515 4482 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.024897 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.041065 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\
\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d49f5daa232b8c42fe1b250cf8b1fc07740ef0caba10dd7cfb7304877e8ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\
\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.046732 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.046788 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.046798 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.046813 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.046826 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:32Z","lastTransitionTime":"2025-11-25T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.051379 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.065121 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.074622 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.085803 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.097194 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.108673 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.120456 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.128422 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.141826 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.148961 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.148991 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.149000 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.149017 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.149032 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:32Z","lastTransitionTime":"2025-11-25T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.153080 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.161201 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.180911 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.191357 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.201918 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.210193 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.222232 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d49f5daa232b8c42fe1b250cf8b1fc07740ef0c
aba10dd7cfb7304877e8ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.230823 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.239872 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.246275 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.250918 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.250946 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.250956 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.250968 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.250978 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:32Z","lastTransitionTime":"2025-11-25T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.254723 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.262693 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.270436 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.277597 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.353548 4482 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.353874 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.353885 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.353903 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.353914 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:32Z","lastTransitionTime":"2025-11-25T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.456231 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.456263 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.456273 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.456290 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.456299 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:32Z","lastTransitionTime":"2025-11-25T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.558468 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.558504 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.558516 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.558529 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.558538 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:32Z","lastTransitionTime":"2025-11-25T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.660853 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.660889 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.660901 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.660915 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.660923 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:32Z","lastTransitionTime":"2025-11-25T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.762454 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.762490 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.762502 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.762517 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.762526 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:32Z","lastTransitionTime":"2025-11-25T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.864567 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.864604 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.864613 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.864627 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.864636 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:32Z","lastTransitionTime":"2025-11-25T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.966588 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.966628 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.966637 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.966655 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.966668 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:32Z","lastTransitionTime":"2025-11-25T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.971014 4482 generic.go:334] "Generic (PLEG): container finished" podID="1b267b2b-7642-40e7-985d-4f5d8cff541c" containerID="6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7" exitCode=0 Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.971075 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" event={"ID":"1b267b2b-7642-40e7-985d-4f5d8cff541c","Type":"ContainerDied","Data":"6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7"} Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.971127 4482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.971500 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.994197 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:32 crc kubenswrapper[4482]: I1125 06:47:32.997854 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:32Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.015908 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.033746 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.047928 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.066228 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.068852 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.068883 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:33 
crc kubenswrapper[4482]: I1125 06:47:33.068894 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.068910 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.068923 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:33Z","lastTransitionTime":"2025-11-25T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.076275 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.088364 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.105822 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.116853 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.128159 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.142853 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d49f5daa232b8c42fe1b250cf8b1fc07740ef0caba10dd7cfb7304877e8ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.153884 4482 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.163363 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.171052 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.171093 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.171102 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.171116 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.171125 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:33Z","lastTransitionTime":"2025-11-25T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.172043 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.182049 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.191461 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.202823 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.212890 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.225802 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a083
13edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d49f5daa232b8c42fe1b250cf8b1fc07740ef0caba10dd7cfb7304877e8ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.235526 4482 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f
613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.247073 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.255189 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.264610 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.274212 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.274253 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.274264 4482 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.274284 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.274298 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:33Z","lastTransitionTime":"2025-11-25T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.274627 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.283386 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.293179 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.300379 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.309079 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.377187 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.377237 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.377251 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.377272 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.377286 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:33Z","lastTransitionTime":"2025-11-25T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.479364 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.479405 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.479415 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.479431 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.479441 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:33Z","lastTransitionTime":"2025-11-25T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.581893 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.581932 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.581941 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.581954 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.581964 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:33Z","lastTransitionTime":"2025-11-25T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.683761 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.683802 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.683810 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.683824 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.683838 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:33Z","lastTransitionTime":"2025-11-25T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.786238 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.786273 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.786282 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.786296 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.786306 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:33Z","lastTransitionTime":"2025-11-25T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.830664 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:33 crc kubenswrapper[4482]: E1125 06:47:33.830776 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.831054 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:33 crc kubenswrapper[4482]: E1125 06:47:33.831364 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.831450 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:33 crc kubenswrapper[4482]: E1125 06:47:33.831517 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.887568 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.887599 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.887611 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.887624 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.887633 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:33Z","lastTransitionTime":"2025-11-25T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.974683 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/0.log" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.976769 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerID="8d49f5daa232b8c42fe1b250cf8b1fc07740ef0caba10dd7cfb7304877e8ab41" exitCode=1 Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.976827 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"8d49f5daa232b8c42fe1b250cf8b1fc07740ef0caba10dd7cfb7304877e8ab41"} Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.977385 4482 scope.go:117] "RemoveContainer" containerID="8d49f5daa232b8c42fe1b250cf8b1fc07740ef0caba10dd7cfb7304877e8ab41" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.984265 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" event={"ID":"1b267b2b-7642-40e7-985d-4f5d8cff541c","Type":"ContainerStarted","Data":"e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e"} Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.989236 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.989370 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.989567 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.989737 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.989890 4482 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:33Z","lastTransitionTime":"2025-11-25T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:33 crc kubenswrapper[4482]: I1125 06:47:33.994151 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:33Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.004852 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.015920 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.029750 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d49f5daa232b8c42fe1b250cf8b1fc07740ef0c
aba10dd7cfb7304877e8ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d49f5daa232b8c42fe1b250cf8b1fc07740ef0caba10dd7cfb7304877e8ab41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"message\\\":\\\"l\\\\nI1125 06:47:33.510367 5682 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 06:47:33.510370 5682 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 06:47:33.510381 5682 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 06:47:33.510391 5682 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 06:47:33.510408 5682 factory.go:656] Stopping watch factory\\\\nI1125 06:47:33.510420 5682 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 06:47:33.510609 5682 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 06:47:33.510682 5682 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 06:47:33.510700 5682 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 06:47:33.510709 5682 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 06:47:33.509404 5682 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 06:47:33.510858 5682 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 06:47:33.510872 5682 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 06:47:33.510882 5682 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.039292 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.050410 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.063946 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.076192 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.090252 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.092699 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.092730 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.092741 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.092756 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.092768 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:34Z","lastTransitionTime":"2025-11-25T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.102952 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8
a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.110981 4482 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.121020 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.129672 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.139110 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.148391 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.157230 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.166760 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.176661 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.183968 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.195119 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.195144 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.195153 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.195185 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.195201 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:34Z","lastTransitionTime":"2025-11-25T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.196085 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.206649 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.216927 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.226902 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.247015 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.263819 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d49f5daa232b8c42fe1b250cf8b1fc07740ef0c
aba10dd7cfb7304877e8ab41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d49f5daa232b8c42fe1b250cf8b1fc07740ef0caba10dd7cfb7304877e8ab41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"message\\\":\\\"l\\\\nI1125 06:47:33.510367 5682 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 06:47:33.510370 5682 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 06:47:33.510381 5682 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 06:47:33.510391 5682 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 06:47:33.510408 5682 factory.go:656] Stopping watch factory\\\\nI1125 06:47:33.510420 5682 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 06:47:33.510609 5682 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 06:47:33.510682 5682 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 06:47:33.510700 5682 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 06:47:33.510709 5682 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 06:47:33.509404 5682 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 06:47:33.510858 5682 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 06:47:33.510872 5682 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 06:47:33.510882 5682 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.278373 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.291626 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.297128 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.297186 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:34 crc 
kubenswrapper[4482]: I1125 06:47:34.297196 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.297210 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.297219 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:34Z","lastTransitionTime":"2025-11-25T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.303135 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z" Nov 
25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.399391 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.399437 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.399446 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.399460 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.399472 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:34Z","lastTransitionTime":"2025-11-25T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.501185 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.501218 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.501226 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.501239 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.501247 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:34Z","lastTransitionTime":"2025-11-25T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.603716 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.603749 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.603757 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.603768 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.603776 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:34Z","lastTransitionTime":"2025-11-25T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
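
Every one of the "Failed to update status for pod" entries above fails for the same reason: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, while the node's clock reads 2025-11-25. A minimal sketch to confirm the certificate's validity window from the node itself (assumptions: Python 3 with the third-party cryptography package available; host and port are taken from the log line above):

import socket
import ssl

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the log line above

# Disable verification on purpose: we want to inspect the certificate even
# though it is expired, which is exactly what normal verification rejects.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)

If the printed notAfter matches 2025-08-24T17:21:41Z, the webhook's serving certificate is stale and the status-patch failures below are a downstream symptom of it, not of the patches themselves.
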
Has your network provider started?"} Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.705754 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.705790 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.705799 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.705813 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.705821 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:34Z","lastTransitionTime":"2025-11-25T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.808306 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.808362 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.808373 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.808390 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.808399 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:34Z","lastTransitionTime":"2025-11-25T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.910538 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.910572 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.910581 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.910595 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.910602 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:34Z","lastTransitionTime":"2025-11-25T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.989345 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/1.log" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.990190 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/0.log" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.992545 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerID="d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4" exitCode=1 Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.992592 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4"} Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.992649 4482 scope.go:117] "RemoveContainer" containerID="8d49f5daa232b8c42fe1b250cf8b1fc07740ef0caba10dd7cfb7304877e8ab41" Nov 25 06:47:34 crc kubenswrapper[4482]: I1125 06:47:34.993015 4482 scope.go:117] "RemoveContainer" containerID="d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4" Nov 25 06:47:34 crc kubenswrapper[4482]: E1125 06:47:34.993146 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.002135 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.011979 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.012008 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.012016 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.012028 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.012036 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:35Z","lastTransitionTime":"2025-11-25T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.016036 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.022492 4482 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.030887 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.037903 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.044633 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.053010 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.059317 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.066634 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.073823 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.081324 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.088423 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.096057 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.107436 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a083
13edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d49f5daa232b8c42fe1b250cf8b1fc07740ef0caba10dd7cfb7304877e8ab41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"message\\\":\\\"l\\\\nI1125 06:47:33.510367 5682 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 06:47:33.510370 5682 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 06:47:33.510381 5682 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 06:47:33.510391 5682 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 06:47:33.510408 5682 factory.go:656] Stopping watch factory\\\\nI1125 06:47:33.510420 5682 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 06:47:33.510609 5682 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 06:47:33.510682 5682 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 06:47:33.510700 5682 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 06:47:33.510709 5682 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 06:47:33.509404 5682 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 06:47:33.510858 5682 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 06:47:33.510872 5682 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 06:47:33.510882 5682 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:34Z\\\",\\\"message\\\":\\\"ue, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 06:47:34.613970 5844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:47:34.613985 5844 services_controller.go:451] Built service openshift-dns/dns-default cluster-wide LB for network=default: 
[]services.LB{}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1
74f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.113572 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.113593 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.113602 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.113615 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.113623 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:35Z","lastTransitionTime":"2025-11-25T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.214958 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.214983 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.214990 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.215001 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.215009 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:35Z","lastTransitionTime":"2025-11-25T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.316265 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.316282 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.316290 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.316303 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.316310 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:35Z","lastTransitionTime":"2025-11-25T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.418100 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.418201 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.418257 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.418308 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.418367 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:35Z","lastTransitionTime":"2025-11-25T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.519913 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.520049 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.520106 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.520157 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.520230 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:35Z","lastTransitionTime":"2025-11-25T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.621721 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.621744 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.621752 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.621764 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.621771 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:35Z","lastTransitionTime":"2025-11-25T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.723803 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.724111 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.724221 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.724302 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.724358 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:35Z","lastTransitionTime":"2025-11-25T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.826588 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.826630 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.826641 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.826655 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.826665 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:35Z","lastTransitionTime":"2025-11-25T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.830507 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:35 crc kubenswrapper[4482]: E1125 06:47:35.830596 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.830872 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.830935 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:35 crc kubenswrapper[4482]: E1125 06:47:35.831190 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:35 crc kubenswrapper[4482]: E1125 06:47:35.831237 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.840900 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.850967 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.857582 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.866747 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.874374 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.882416 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.895356 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.902252 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.910880 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.919914 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.929858 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.929894 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.929905 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.929919 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.929928 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:35Z","lastTransitionTime":"2025-11-25T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.930981 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.939373 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.948311 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.960839 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6
e716990cb90754edc6bf1dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d49f5daa232b8c42fe1b250cf8b1fc07740ef0caba10dd7cfb7304877e8ab41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"message\\\":\\\"l\\\\nI1125 06:47:33.510367 5682 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 06:47:33.510370 5682 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 06:47:33.510381 5682 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 06:47:33.510391 5682 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 06:47:33.510408 5682 factory.go:656] Stopping watch factory\\\\nI1125 06:47:33.510420 5682 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 06:47:33.510609 5682 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 06:47:33.510682 5682 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 06:47:33.510700 5682 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 06:47:33.510709 5682 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 06:47:33.509404 5682 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 06:47:33.510858 5682 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 06:47:33.510872 5682 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 06:47:33.510882 5682 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:34Z\\\",\\\"message\\\":\\\"ue, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 06:47:34.613970 5844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:47:34.613985 5844 services_controller.go:451] Built service openshift-dns/dns-default cluster-wide LB for network=default: []services.LB{}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"i
nitContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:35 crc kubenswrapper[4482]: I1125 06:47:35.998801 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/1.log" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.001880 4482 scope.go:117] "RemoveContainer" containerID="d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4" Nov 25 06:47:36 crc kubenswrapper[4482]: E1125 06:47:36.002045 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.009820 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.018354 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.026532 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.033095 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.033133 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.033145 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.033180 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.033195 4482 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.035286 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.049855 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6
e716990cb90754edc6bf1dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:34Z\\\",\\\"message\\\":\\\"ue, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 06:47:34.613970 5844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:47:34.613985 5844 services_controller.go:451] Built service openshift-dns/dns-default cluster-wide LB for network=default: []services.LB{}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.059619 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.071338 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.078876 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.088160 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.096002 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.103561 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.113471 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.120747 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.128877 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.135536 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.135564 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.135573 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.135587 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.135599 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.238542 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.238569 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.238578 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.238589 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.238600 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.340742 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.340779 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.340788 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.340801 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.340812 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.442935 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.442976 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.442986 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.443001 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.443011 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.544767 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.544805 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.544814 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.544827 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.544837 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.646549 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.646583 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.646592 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.646607 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.646619 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.709811 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.709839 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.709848 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.709859 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.709868 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: E1125 06:47:36.718079 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.725640 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.725667 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.725676 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.725689 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.725708 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: E1125 06:47:36.734356 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.736867 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.736965 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.737018 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.737071 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.737137 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: E1125 06:47:36.745725 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.748810 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.748898 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.748975 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.749045 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.749096 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: E1125 06:47:36.757476 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.760248 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.760277 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.760288 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.760300 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.760309 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: E1125 06:47:36.768986 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:36Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:36 crc kubenswrapper[4482]: E1125 06:47:36.769094 4482 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.770262 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.770289 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.770298 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.770312 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.770321 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.872079 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.872344 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.872425 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.872524 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.872583 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.974924 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.975105 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.975163 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.975248 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:36 crc kubenswrapper[4482]: I1125 06:47:36.975319 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:36Z","lastTransitionTime":"2025-11-25T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.077353 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.077820 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.077881 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.077940 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.078017 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:37Z","lastTransitionTime":"2025-11-25T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.179360 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.179390 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.179400 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.179414 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.179426 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:37Z","lastTransitionTime":"2025-11-25T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.282375 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.282411 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.282420 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.282440 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.282450 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:37Z","lastTransitionTime":"2025-11-25T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.384677 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.384725 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.384735 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.384750 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.384759 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:37Z","lastTransitionTime":"2025-11-25T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.408916 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn"] Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.409397 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.411260 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.411625 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.421530 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.430062 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.439034 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.453021 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6
e716990cb90754edc6bf1dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:34Z\\\",\\\"message\\\":\\\"ue, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 06:47:34.613970 5844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:47:34.613985 5844 services_controller.go:451] Built service openshift-dns/dns-default cluster-wide LB for network=default: []services.LB{}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.460998 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.469761 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.480540 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.486845 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.486877 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.486890 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.486908 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.486917 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:37Z","lastTransitionTime":"2025-11-25T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.487568 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.497963 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.506611 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.515676 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.523484 4482 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 
06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.533191 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.540101 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.550633 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.580285 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2n2x\" (UniqueName: \"kubernetes.io/projected/9407ebd6-89eb-4522-81c8-b224bf948ba4-kube-api-access-j2n2x\") pod \"ovnkube-control-plane-749d76644c-qpxjn\" (UID: \"9407ebd6-89eb-4522-81c8-b224bf948ba4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.580325 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9407ebd6-89eb-4522-81c8-b224bf948ba4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qpxjn\" (UID: \"9407ebd6-89eb-4522-81c8-b224bf948ba4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.580458 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9407ebd6-89eb-4522-81c8-b224bf948ba4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qpxjn\" (UID: \"9407ebd6-89eb-4522-81c8-b224bf948ba4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.580563 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9407ebd6-89eb-4522-81c8-b224bf948ba4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qpxjn\" (UID: \"9407ebd6-89eb-4522-81c8-b224bf948ba4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.588646 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.588676 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.588686 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.588710 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.588718 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:37Z","lastTransitionTime":"2025-11-25T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.681524 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9407ebd6-89eb-4522-81c8-b224bf948ba4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qpxjn\" (UID: \"9407ebd6-89eb-4522-81c8-b224bf948ba4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.681693 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2n2x\" (UniqueName: \"kubernetes.io/projected/9407ebd6-89eb-4522-81c8-b224bf948ba4-kube-api-access-j2n2x\") pod \"ovnkube-control-plane-749d76644c-qpxjn\" (UID: \"9407ebd6-89eb-4522-81c8-b224bf948ba4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.681784 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9407ebd6-89eb-4522-81c8-b224bf948ba4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qpxjn\" (UID: \"9407ebd6-89eb-4522-81c8-b224bf948ba4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.681859 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9407ebd6-89eb-4522-81c8-b224bf948ba4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qpxjn\" (UID: \"9407ebd6-89eb-4522-81c8-b224bf948ba4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.682477 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9407ebd6-89eb-4522-81c8-b224bf948ba4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qpxjn\" (UID: \"9407ebd6-89eb-4522-81c8-b224bf948ba4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.682494 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9407ebd6-89eb-4522-81c8-b224bf948ba4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qpxjn\" (UID: \"9407ebd6-89eb-4522-81c8-b224bf948ba4\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.685751 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9407ebd6-89eb-4522-81c8-b224bf948ba4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qpxjn\" (UID: \"9407ebd6-89eb-4522-81c8-b224bf948ba4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.691000 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.691028 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.691037 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.691048 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.691057 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:37Z","lastTransitionTime":"2025-11-25T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.695644 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2n2x\" (UniqueName: \"kubernetes.io/projected/9407ebd6-89eb-4522-81c8-b224bf948ba4-kube-api-access-j2n2x\") pod \"ovnkube-control-plane-749d76644c-qpxjn\" (UID: \"9407ebd6-89eb-4522-81c8-b224bf948ba4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.719783 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" Nov 25 06:47:37 crc kubenswrapper[4482]: W1125 06:47:37.730846 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9407ebd6_89eb_4522_81c8_b224bf948ba4.slice/crio-6dfc2bd0976285d9048798c9ddc3f7fb3c216ee76dafcfdcc076622e337e94fa WatchSource:0}: Error finding container 6dfc2bd0976285d9048798c9ddc3f7fb3c216ee76dafcfdcc076622e337e94fa: Status 404 returned error can't find the container with id 6dfc2bd0976285d9048798c9ddc3f7fb3c216ee76dafcfdcc076622e337e94fa Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.793219 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.793264 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.793273 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.793290 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.793302 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:37Z","lastTransitionTime":"2025-11-25T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.829771 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.829775 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:37 crc kubenswrapper[4482]: E1125 06:47:37.829902 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:37 crc kubenswrapper[4482]: E1125 06:47:37.829994 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.829789 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:37 crc kubenswrapper[4482]: E1125 06:47:37.830078 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.896521 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.896870 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.896881 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.896900 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:37 crc kubenswrapper[4482]: I1125 06:47:37.896916 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:37Z","lastTransitionTime":"2025-11-25T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:37.999799 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:37.999835 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:37.999846 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:37.999862 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:37.999873 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:37Z","lastTransitionTime":"2025-11-25T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.008871 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" event={"ID":"9407ebd6-89eb-4522-81c8-b224bf948ba4","Type":"ContainerStarted","Data":"5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.009067 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" event={"ID":"9407ebd6-89eb-4522-81c8-b224bf948ba4","Type":"ContainerStarted","Data":"874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.009131 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" event={"ID":"9407ebd6-89eb-4522-81c8-b224bf948ba4","Type":"ContainerStarted","Data":"6dfc2bd0976285d9048798c9ddc3f7fb3c216ee76dafcfdcc076622e337e94fa"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.021475 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.037868 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.051051 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.066518 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.076890 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.086494 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.095611 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.102356 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.102449 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.102513 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.102578 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.102632 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:38Z","lastTransitionTime":"2025-11-25T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.109398 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6
e716990cb90754edc6bf1dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:34Z\\\",\\\"message\\\":\\\"ue, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 06:47:34.613970 5844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:47:34.613985 5844 services_controller.go:451] Built service openshift-dns/dns-default cluster-wide LB for network=default: []services.LB{}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.116590 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.124389 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.133962 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ff
cd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.140953 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.150223 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.160339 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.167765 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.204853 4482 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.204908 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.204918 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.204932 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.204940 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:38Z","lastTransitionTime":"2025-11-25T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.307282 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.307312 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.307344 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.307359 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.307373 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:38Z","lastTransitionTime":"2025-11-25T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.411036 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.411076 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.411087 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.411104 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.411121 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:38Z","lastTransitionTime":"2025-11-25T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.513122 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.513185 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.513198 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.513212 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.513221 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:38Z","lastTransitionTime":"2025-11-25T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.614926 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.614974 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.614985 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.615008 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.615022 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:38Z","lastTransitionTime":"2025-11-25T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.716513 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.716543 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.716555 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.716570 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.716581 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:38Z","lastTransitionTime":"2025-11-25T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.804249 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2xhh4"] Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.804878 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:38 crc kubenswrapper[4482]: E1125 06:47:38.805002 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.814571 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\
"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.817988 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.818029 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.818039 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.818053 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.818063 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:38Z","lastTransitionTime":"2025-11-25T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.824228 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd657
24d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\
\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.831781 4482 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.839909 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.848444 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.855594 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.861982 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.870794 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.877343 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.884813 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.892393 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.900573 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.907980 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.915949 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.919794 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.919823 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.919832 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.919846 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.919855 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:38Z","lastTransitionTime":"2025-11-25T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.928002 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6
e716990cb90754edc6bf1dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:34Z\\\",\\\"message\\\":\\\"ue, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 06:47:34.613970 5844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:47:34.613985 5844 services_controller.go:451] Built service openshift-dns/dns-default cluster-wide LB for network=default: []services.LB{}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.937156 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.993578 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdfzj\" (UniqueName: \"kubernetes.io/projected/0a1c9846-2a7e-402e-985f-51a244241bd7-kube-api-access-wdfzj\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:38 crc kubenswrapper[4482]: I1125 06:47:38.993661 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.021623 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.021676 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.021685 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.021714 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.021724 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:39Z","lastTransitionTime":"2025-11-25T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.094124 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.094162 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdfzj\" (UniqueName: \"kubernetes.io/projected/0a1c9846-2a7e-402e-985f-51a244241bd7-kube-api-access-wdfzj\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.094270 4482 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.094334 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs podName:0a1c9846-2a7e-402e-985f-51a244241bd7 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:39.594317384 +0000 UTC m=+34.082548653 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs") pod "network-metrics-daemon-2xhh4" (UID: "0a1c9846-2a7e-402e-985f-51a244241bd7") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.107729 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdfzj\" (UniqueName: \"kubernetes.io/projected/0a1c9846-2a7e-402e-985f-51a244241bd7-kube-api-access-wdfzj\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.123199 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.123230 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.123240 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.123252 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.123262 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:39Z","lastTransitionTime":"2025-11-25T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.225948 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.225996 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.226005 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.226021 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.226031 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:39Z","lastTransitionTime":"2025-11-25T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.328133 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.328203 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.328213 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.328229 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.328237 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:39Z","lastTransitionTime":"2025-11-25T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.431228 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.431268 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.431280 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.431296 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.431305 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:39Z","lastTransitionTime":"2025-11-25T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.533322 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.533361 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.533371 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.533388 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.533402 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:39Z","lastTransitionTime":"2025-11-25T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.601085 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.601239 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.601399 4482 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.601492 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:47:55.601417534 +0000 UTC m=+50.089648793 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.601590 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs podName:0a1c9846-2a7e-402e-985f-51a244241bd7 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:40.601577966 +0000 UTC m=+35.089809225 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs") pod "network-metrics-daemon-2xhh4" (UID: "0a1c9846-2a7e-402e-985f-51a244241bd7") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.636156 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.636206 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.636219 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.636236 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.636431 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:39Z","lastTransitionTime":"2025-11-25T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.701860 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.701898 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.701921 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.701938 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.702029 4482 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.702069 4482 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:55.702057214 +0000 UTC m=+50.190288474 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.702116 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.702152 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.702166 4482 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.702242 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:55.702226344 +0000 UTC m=+50.190457603 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.702337 4482 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.702352 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.702373 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:55.702363191 +0000 UTC m=+50.190594450 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.702377 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.702389 4482 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.702429 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:55.702419358 +0000 UTC m=+50.190650627 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.738941 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.738987 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.738996 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.739010 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.739019 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:39Z","lastTransitionTime":"2025-11-25T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.830101 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.830148 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.830263 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.830291 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.830361 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:39 crc kubenswrapper[4482]: E1125 06:47:39.830476 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.840876 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.840937 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.840951 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.840972 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.840987 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:39Z","lastTransitionTime":"2025-11-25T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.943394 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.943450 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.943460 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.943478 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:39 crc kubenswrapper[4482]: I1125 06:47:39.943489 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:39Z","lastTransitionTime":"2025-11-25T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.046284 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.046324 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.046336 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.046582 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.046605 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:40Z","lastTransitionTime":"2025-11-25T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.148940 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.148995 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.149006 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.149024 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.149035 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:40Z","lastTransitionTime":"2025-11-25T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.251593 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.251635 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.251644 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.251660 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.251670 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:40Z","lastTransitionTime":"2025-11-25T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.353820 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.353865 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.353875 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.353892 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.353902 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:40Z","lastTransitionTime":"2025-11-25T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.456500 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.456552 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.456563 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.456590 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.456604 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:40Z","lastTransitionTime":"2025-11-25T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.558321 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.558363 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.558373 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.558388 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.558402 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:40Z","lastTransitionTime":"2025-11-25T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.609943 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:40 crc kubenswrapper[4482]: E1125 06:47:40.610149 4482 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 06:47:40 crc kubenswrapper[4482]: E1125 06:47:40.610255 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs podName:0a1c9846-2a7e-402e-985f-51a244241bd7 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:42.610233924 +0000 UTC m=+37.098465183 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs") pod "network-metrics-daemon-2xhh4" (UID: "0a1c9846-2a7e-402e-985f-51a244241bd7") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.660956 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.660999 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.661008 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.661022 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.661030 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:40Z","lastTransitionTime":"2025-11-25T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.763719 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.763805 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.763814 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.763842 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.763852 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:40Z","lastTransitionTime":"2025-11-25T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.830386 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:40 crc kubenswrapper[4482]: E1125 06:47:40.830568 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.866114 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.866197 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.866208 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.866225 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.866235 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:40Z","lastTransitionTime":"2025-11-25T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.968152 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.968215 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.968222 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.968235 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:40 crc kubenswrapper[4482]: I1125 06:47:40.968245 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:40Z","lastTransitionTime":"2025-11-25T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.070210 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.070246 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.070255 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.070272 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.070282 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:41Z","lastTransitionTime":"2025-11-25T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.173857 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.173906 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.173915 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.173929 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.173938 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:41Z","lastTransitionTime":"2025-11-25T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.275651 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.275684 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.275696 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.275716 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.275724 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:41Z","lastTransitionTime":"2025-11-25T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.377920 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.377948 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.377958 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.377971 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.377982 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:41Z","lastTransitionTime":"2025-11-25T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.479421 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.479460 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.479471 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.479484 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.479492 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:41Z","lastTransitionTime":"2025-11-25T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.581422 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.581449 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.581457 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.581471 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.581481 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:41Z","lastTransitionTime":"2025-11-25T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.683462 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.683509 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.683518 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.683532 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.683542 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:41Z","lastTransitionTime":"2025-11-25T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.785542 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.785572 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.785586 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.785599 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.785607 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:41Z","lastTransitionTime":"2025-11-25T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.830392 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.830431 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:41 crc kubenswrapper[4482]: E1125 06:47:41.830492 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.830401 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:41 crc kubenswrapper[4482]: E1125 06:47:41.830573 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:41 crc kubenswrapper[4482]: E1125 06:47:41.830654 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.887353 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.887387 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.887396 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.887409 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.887417 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:41Z","lastTransitionTime":"2025-11-25T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.989751 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.989780 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.989788 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.989798 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:41 crc kubenswrapper[4482]: I1125 06:47:41.989805 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:41Z","lastTransitionTime":"2025-11-25T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.092330 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.092365 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.092380 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.092396 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.092404 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:42Z","lastTransitionTime":"2025-11-25T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.194409 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.194446 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.194454 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.194467 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.194474 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:42Z","lastTransitionTime":"2025-11-25T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.296082 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.296110 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.296133 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.296144 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.296153 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:42Z","lastTransitionTime":"2025-11-25T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.331824 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.351219 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.360354 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.369490 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.377542 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.385386 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.397684 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.397720 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.397731 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.397742 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.397750 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:42Z","lastTransitionTime":"2025-11-25T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.399108 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:34Z\\\",\\\"message\\\":\\\"ue, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 06:47:34.613970 5844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:47:34.613985 5844 services_controller.go:451] Built service openshift-dns/dns-default cluster-wide LB for network=default: []services.LB{}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.406421 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.414212 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.422260 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.431462 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.438068 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.446019 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.453621 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate
has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.461154 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.467985 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.476654 4482 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:42Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.500621 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.500823 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.500902 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.500976 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.501035 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:42Z","lastTransitionTime":"2025-11-25T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.603065 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.603089 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.603098 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.603108 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.603130 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:42Z","lastTransitionTime":"2025-11-25T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.627954 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:42 crc kubenswrapper[4482]: E1125 06:47:42.628071 4482 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 06:47:42 crc kubenswrapper[4482]: E1125 06:47:42.628140 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs podName:0a1c9846-2a7e-402e-985f-51a244241bd7 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:46.628122595 +0000 UTC m=+41.116353864 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs") pod "network-metrics-daemon-2xhh4" (UID: "0a1c9846-2a7e-402e-985f-51a244241bd7") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.704897 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.704962 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.704975 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.704998 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.705011 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:42Z","lastTransitionTime":"2025-11-25T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.807015 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.807054 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.807083 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.807100 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.807110 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:42Z","lastTransitionTime":"2025-11-25T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.829918 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:42 crc kubenswrapper[4482]: E1125 06:47:42.830072 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.908609 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.908635 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.908643 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.908657 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:42 crc kubenswrapper[4482]: I1125 06:47:42.908665 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:42Z","lastTransitionTime":"2025-11-25T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.010097 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.010145 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.010160 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.010207 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.010223 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:43Z","lastTransitionTime":"2025-11-25T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.111751 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.111859 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.111928 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.111987 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.112055 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:43Z","lastTransitionTime":"2025-11-25T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.213556 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.213584 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.213592 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.213602 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.213611 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:43Z","lastTransitionTime":"2025-11-25T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.314837 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.314871 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.314880 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.314892 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.314900 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:43Z","lastTransitionTime":"2025-11-25T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.416508 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.416646 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.416756 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.416842 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.416918 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:43Z","lastTransitionTime":"2025-11-25T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.519479 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.519511 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.519522 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.519534 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.519543 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:43Z","lastTransitionTime":"2025-11-25T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.621492 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.621546 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.621555 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.621576 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.621593 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:43Z","lastTransitionTime":"2025-11-25T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.723538 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.723579 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.723588 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.723607 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.723617 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:43Z","lastTransitionTime":"2025-11-25T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.825938 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.825984 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.825996 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.826011 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.826022 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:43Z","lastTransitionTime":"2025-11-25T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.830378 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.830428 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.830382 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:43 crc kubenswrapper[4482]: E1125 06:47:43.830485 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:43 crc kubenswrapper[4482]: E1125 06:47:43.830578 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:43 crc kubenswrapper[4482]: E1125 06:47:43.830650 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.928857 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.928895 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.928905 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.928919 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:43 crc kubenswrapper[4482]: I1125 06:47:43.928927 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:43Z","lastTransitionTime":"2025-11-25T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.030082 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.030115 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.030124 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.030137 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.030148 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:44Z","lastTransitionTime":"2025-11-25T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.132618 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.132658 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.132668 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.132685 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.132694 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:44Z","lastTransitionTime":"2025-11-25T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.234910 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.234943 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.234951 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.234963 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.234972 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:44Z","lastTransitionTime":"2025-11-25T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.336825 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.336865 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.336876 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.336891 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.336902 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:44Z","lastTransitionTime":"2025-11-25T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.438727 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.438758 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.438768 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.438780 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.438789 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:44Z","lastTransitionTime":"2025-11-25T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.540641 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.540674 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.540682 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.540697 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.540717 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:44Z","lastTransitionTime":"2025-11-25T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.642987 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.643016 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.643024 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.643036 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.643045 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:44Z","lastTransitionTime":"2025-11-25T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.744755 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.744777 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.744784 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.744796 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.744804 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:44Z","lastTransitionTime":"2025-11-25T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.828752 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.829582 4482 scope.go:117] "RemoveContainer" containerID="d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.829863 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:44 crc kubenswrapper[4482]: E1125 06:47:44.829966 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.846896 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.847079 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.847088 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.847100 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.847108 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:44Z","lastTransitionTime":"2025-11-25T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.948817 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.948845 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.948854 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.948866 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:44 crc kubenswrapper[4482]: I1125 06:47:44.948874 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:44Z","lastTransitionTime":"2025-11-25T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.027800 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/1.log" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.030810 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerStarted","Data":"9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f"} Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.031211 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.042237 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.050652 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.050681 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.050691 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.050705 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.050722 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:45Z","lastTransitionTime":"2025-11-25T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.053337 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.063901 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.075150 4482 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.083983 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.104383 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.114672 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 
06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 
06:47:45.125816 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.144807 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.152938 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.152989 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.153002 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.153016 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.153024 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:45Z","lastTransitionTime":"2025-11-25T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.159872 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08
ec0f5e6b05f50debf389018f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:34Z\\\",\\\"message\\\":\\\"ue, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 06:47:34.613970 5844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:47:34.613985 5844 services_controller.go:451] Built service openshift-dns/dns-default cluster-wide LB for network=default: 
[]services.LB{}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStat
uses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.172530 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 
06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.180726 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.198920 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.213460 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.237644 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.254798 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.254830 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.254838 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.254851 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.254860 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:45Z","lastTransitionTime":"2025-11-25T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.261612 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.357066 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.357110 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.357122 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.357136 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.357145 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:45Z","lastTransitionTime":"2025-11-25T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.459961 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.460033 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.460045 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.460067 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.460079 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:45Z","lastTransitionTime":"2025-11-25T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.561736 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.561784 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.561795 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.561811 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.561820 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:45Z","lastTransitionTime":"2025-11-25T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.663459 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.663495 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.663506 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.663519 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.663528 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:45Z","lastTransitionTime":"2025-11-25T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.765458 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.765489 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.765497 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.765510 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.765519 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:45Z","lastTransitionTime":"2025-11-25T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.830350 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.830463 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.830450 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:45 crc kubenswrapper[4482]: E1125 06:47:45.830599 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:45 crc kubenswrapper[4482]: E1125 06:47:45.830719 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:45 crc kubenswrapper[4482]: E1125 06:47:45.830825 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.842626 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.852395 4482 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.861867 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.867644 4482 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.867788 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.867853 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.867924 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.867976 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:45Z","lastTransitionTime":"2025-11-25T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.871677 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.885728 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.898644 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.908992 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.919240 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 
06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.929957 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.940788 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.950622 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.960463 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.970518 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.970612 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.970693 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.970791 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.970867 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:45Z","lastTransitionTime":"2025-11-25T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.980083 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:34Z\\\",\\\"message\\\":\\\"ue, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 06:47:34.613970 5844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:47:34.613985 5844 services_controller.go:451] Built service openshift-dns/dns-default cluster-wide LB for network=default: 
[]services.LB{}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStat
uses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:45 crc kubenswrapper[4482]: I1125 06:47:45.992622 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.003480 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ff
cd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.011233 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.036647 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/2.log" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.037357 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/1.log" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.040527 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerID="9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f" exitCode=1 Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.040587 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f"} Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.040650 4482 scope.go:117] "RemoveContainer" containerID="d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.041362 4482 scope.go:117] "RemoveContainer" containerID="9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f" Nov 25 06:47:46 crc kubenswrapper[4482]: E1125 06:47:46.042037 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.052341 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.062456 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.073315 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.073351 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.073361 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.073377 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.073387 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:46Z","lastTransitionTime":"2025-11-25T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.074749 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.088073 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d027cb3b216cded76deff149c9ab2512fa9d1ad6e716990cb90754edc6bf1dd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:34Z\\\",\\\"message\\\":\\\"ue, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1125 06:47:34.613970 5844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:34Z is after 
2025-08-24T17:21:41Z]\\\\nI1125 06:47:34.613985 5844 services_controller.go:451] Built service openshift-dns/dns-default cluster-wide LB for network=default: []services.LB{}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:45Z\\\",\\\"message\\\":\\\"shift-dns/node-resolver-xk9c4\\\\nI1125 06:47:45.602488 6043 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1125 06:47:45.602506 6043 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-xk9c4 in node crc\\\\nI1125 06:47:45.602442 6043 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1125 06:47:45.602514 6043 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-xk9c4 after 0 failed attempt(s)\\\\nI1125 06:47:45.602725 6043 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-xk9c4\\\\nF1125 06:47:45.602506 6043 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.096528 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.106355 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.117337 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.124641 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 
06:47:46.133512 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.142558 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.150512 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 
2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.158023 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.165821 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.175912 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.175945 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.175955 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.175972 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.175982 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:46Z","lastTransitionTime":"2025-11-25T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.177692 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.185657 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.194535 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:46Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.277519 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.277553 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.277563 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.277576 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.277586 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:46Z","lastTransitionTime":"2025-11-25T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.380156 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.380211 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.380220 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.380237 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.380250 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:46Z","lastTransitionTime":"2025-11-25T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.482395 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.482434 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.482445 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.482457 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.482467 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:46Z","lastTransitionTime":"2025-11-25T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.584823 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.584865 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.584878 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.584903 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.584916 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:46Z","lastTransitionTime":"2025-11-25T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.660501 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:46 crc kubenswrapper[4482]: E1125 06:47:46.660653 4482 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 06:47:46 crc kubenswrapper[4482]: E1125 06:47:46.660734 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs podName:0a1c9846-2a7e-402e-985f-51a244241bd7 nodeName:}" failed. No retries permitted until 2025-11-25 06:47:54.660706124 +0000 UTC m=+49.148937383 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs") pod "network-metrics-daemon-2xhh4" (UID: "0a1c9846-2a7e-402e-985f-51a244241bd7") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.686869 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.686900 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.686910 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.686941 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.686951 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:46Z","lastTransitionTime":"2025-11-25T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.788727 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.788775 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.788789 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.788808 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.788819 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:46Z","lastTransitionTime":"2025-11-25T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.830348 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:46 crc kubenswrapper[4482]: E1125 06:47:46.830488 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.890323 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.890379 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.890390 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.890408 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.890418 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:46Z","lastTransitionTime":"2025-11-25T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.992699 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.992762 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.992774 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.992789 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:46 crc kubenswrapper[4482]: I1125 06:47:46.992798 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:46Z","lastTransitionTime":"2025-11-25T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.044381 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/2.log" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.047407 4482 scope.go:117] "RemoveContainer" containerID="9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f" Nov 25 06:47:47 crc kubenswrapper[4482]: E1125 06:47:47.047624 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.060234 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.060267 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.060282 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.060296 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.060305 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.060826 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: E1125 06:47:47.069244 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.070414 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z 
is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.071475 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.071504 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.071515 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.071526 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.071533 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.081467 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699
a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: E1125 06:47:47.083752 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.086314 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.086340 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.086350 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.086363 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.086372 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.089738 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: E1125 06:47:47.094789 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.097479 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.097513 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.097522 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.097533 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.097550 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.100907 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: E1125 06:47:47.105926 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.108390 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11
-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.108565 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.108584 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.108594 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.108603 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.108612 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: E1125 06:47:47.117215 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: E1125 06:47:47.117317 4482 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.118607 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.118631 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.118640 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.118655 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.118667 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.119377 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.127574 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.136742 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.146078 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.156102 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.171438 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a083
13edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:45Z\\\",\\\"message\\\":\\\"shift-dns/node-resolver-xk9c4\\\\nI1125 06:47:45.602488 6043 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1125 06:47:45.602506 6043 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-xk9c4 in node crc\\\\nI1125 06:47:45.602442 6043 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1125 06:47:45.602514 6043 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-xk9c4 after 0 failed attempt(s)\\\\nI1125 06:47:45.602725 6043 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-xk9c4\\\\nF1125 06:47:45.602506 6043 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.180798 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 
06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.190077 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.200390 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.208493 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:47Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.221564 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.221597 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.221606 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.221643 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.221654 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.323596 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.323629 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.323638 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.323667 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.323681 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.425128 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.425189 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.425201 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.425215 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.425224 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.527155 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.527208 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.527218 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.527234 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.527245 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.629148 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.629213 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.629223 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.629242 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.629252 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.731746 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.731786 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.731796 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.731813 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.731824 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.830042 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.830074 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:47 crc kubenswrapper[4482]: E1125 06:47:47.830195 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:47 crc kubenswrapper[4482]: E1125 06:47:47.830266 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.830296 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:47 crc kubenswrapper[4482]: E1125 06:47:47.830381 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.834072 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.834119 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.834129 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.834148 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.834157 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.936394 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.936425 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.936433 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.936446 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:47 crc kubenswrapper[4482]: I1125 06:47:47.936454 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:47Z","lastTransitionTime":"2025-11-25T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.038456 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.038502 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.038510 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.038527 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.038538 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:48Z","lastTransitionTime":"2025-11-25T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.140543 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.140592 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.140602 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.140616 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.140626 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:48Z","lastTransitionTime":"2025-11-25T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.242754 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.242800 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.242810 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.242823 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.242832 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:48Z","lastTransitionTime":"2025-11-25T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.344936 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.344968 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.344978 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.344991 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.345000 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:48Z","lastTransitionTime":"2025-11-25T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.446393 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.446523 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.446603 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.446678 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.446753 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:48Z","lastTransitionTime":"2025-11-25T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.548759 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.548802 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.548814 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.548832 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.548842 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:48Z","lastTransitionTime":"2025-11-25T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.650849 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.650889 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.650900 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.650915 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.650924 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:48Z","lastTransitionTime":"2025-11-25T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.753740 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.753774 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.753783 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.753798 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.753806 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:48Z","lastTransitionTime":"2025-11-25T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.830727 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:48 crc kubenswrapper[4482]: E1125 06:47:48.830901 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.856350 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.856381 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.856391 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.856408 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.856419 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:48Z","lastTransitionTime":"2025-11-25T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.958548 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.958575 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.958583 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.958596 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:48 crc kubenswrapper[4482]: I1125 06:47:48.958603 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:48Z","lastTransitionTime":"2025-11-25T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
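Every entry in this stretch has a single root cause: the container runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, so the kubelet keeps node crc NotReady and refuses to sync any pod that needs the cluster network. A minimal sketch of a readiness probe for that directory, assuming only the path named in the log itself (the extension list is a conventional guess, not taken from the log):

"""Check whether a CNI network config exists where the kubelet looks."""
from pathlib import Path

# Directory taken verbatim from the log message above.
CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")
# Extension list is an assumption based on what CNI plugins conventionally install.
CNI_EXTENSIONS = {".conf", ".conflist", ".json"}

def cni_configs(directory: Path = CNI_CONF_DIR) -> list:
    """Return any CNI config files present in the directory."""
    if not directory.is_dir():
        return []
    return sorted(
        p for p in directory.iterdir()
        if p.is_file() and p.suffix in CNI_EXTENSIONS
    )

if __name__ == "__main__":
    found = cni_configs()
    if found:
        print("CNI config present:", ", ".join(p.name for p in found))
    else:
        print(f"No CNI config in {CNI_CONF_DIR}; the network provider has not written one yet.")

The expectation is that once the network provider writes a config file there, the runtime flips NetworkReady to true and these heartbeat entries stop.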
Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.060237 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.060284 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.060294 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.060307 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.060316 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:49Z","lastTransitionTime":"2025-11-25T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.161795 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.161827 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.161837 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.161848 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.161859 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:49Z","lastTransitionTime":"2025-11-25T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.264247 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.264283 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.264296 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.264309 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.264316 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:49Z","lastTransitionTime":"2025-11-25T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.366404 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.366796 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.366885 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.366965 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.367024 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:49Z","lastTransitionTime":"2025-11-25T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.469688 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.469752 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.469765 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.469777 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.469785 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:49Z","lastTransitionTime":"2025-11-25T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.571619 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.571654 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.571663 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.571676 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.571684 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:49Z","lastTransitionTime":"2025-11-25T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.673326 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.673525 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.673586 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.673683 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.673757 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:49Z","lastTransitionTime":"2025-11-25T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.775106 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.775131 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.775141 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.775151 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.775159 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:49Z","lastTransitionTime":"2025-11-25T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.830005 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.830021 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.830058 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:49 crc kubenswrapper[4482]: E1125 06:47:49.830136 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:49 crc kubenswrapper[4482]: E1125 06:47:49.830234 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:49 crc kubenswrapper[4482]: E1125 06:47:49.830277 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.877437 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.877562 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.877623 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.877677 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.877749 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:49Z","lastTransitionTime":"2025-11-25T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.980255 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.980383 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.980532 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.980670 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:49 crc kubenswrapper[4482]: I1125 06:47:49.980817 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:49Z","lastTransitionTime":"2025-11-25T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.082656 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.082689 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.082715 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.082737 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.082747 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:50Z","lastTransitionTime":"2025-11-25T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.184485 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.184523 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.184532 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.184545 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.184554 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:50Z","lastTransitionTime":"2025-11-25T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.286774 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.286813 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.286822 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.286836 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.286844 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:50Z","lastTransitionTime":"2025-11-25T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.388647 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.388704 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.388716 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.388747 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.388760 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:50Z","lastTransitionTime":"2025-11-25T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.490892 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.491259 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.491373 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.491457 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.491538 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:50Z","lastTransitionTime":"2025-11-25T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.593696 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.593750 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.593759 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.593773 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.593781 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:50Z","lastTransitionTime":"2025-11-25T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.696239 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.696466 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.696535 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.696591 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.696652 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:50Z","lastTransitionTime":"2025-11-25T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.799259 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.799319 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.799329 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.799340 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.799348 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:50Z","lastTransitionTime":"2025-11-25T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.830215 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:50 crc kubenswrapper[4482]: E1125 06:47:50.830483 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.900686 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.900755 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.900765 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.900781 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:50 crc kubenswrapper[4482]: I1125 06:47:50.900791 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:50Z","lastTransitionTime":"2025-11-25T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.003083 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.003122 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.003135 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.003150 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.003159 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:51Z","lastTransitionTime":"2025-11-25T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.104882 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.104911 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.104920 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.104931 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.104938 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:51Z","lastTransitionTime":"2025-11-25T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.207221 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.207276 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.207287 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.207301 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.207313 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:51Z","lastTransitionTime":"2025-11-25T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.309256 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.309310 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.309319 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.309331 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.309339 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:51Z","lastTransitionTime":"2025-11-25T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.411094 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.411160 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.411204 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.411230 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.411252 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:51Z","lastTransitionTime":"2025-11-25T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.513927 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.513962 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.513971 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.514003 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.514014 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:51Z","lastTransitionTime":"2025-11-25T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.615788 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.615811 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.615819 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.615828 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.615836 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:51Z","lastTransitionTime":"2025-11-25T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.717351 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.717372 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.717379 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.717389 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.717396 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:51Z","lastTransitionTime":"2025-11-25T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.819293 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.819311 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.819319 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.819328 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.819335 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:51Z","lastTransitionTime":"2025-11-25T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.829733 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:51 crc kubenswrapper[4482]: E1125 06:47:51.829820 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.829739 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.829733 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:51 crc kubenswrapper[4482]: E1125 06:47:51.829931 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:51 crc kubenswrapper[4482]: E1125 06:47:51.829880 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.920835 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.920862 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.920869 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.920896 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:51 crc kubenswrapper[4482]: I1125 06:47:51.920905 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:51Z","lastTransitionTime":"2025-11-25T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.022474 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.022501 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.022509 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.022520 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.022527 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:52Z","lastTransitionTime":"2025-11-25T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
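Each setters.go:603 entry embeds the node's Ready condition as a JSON object, so the repeated heartbeats can be collapsed mechanically. The sketch below pulls the reason and lastHeartbeatTime out of every "Node became not ready" entry; it assumes each condition={...} object sits on a single line, so entries that the capture wrapped mid-object are simply skipped:

import json
import re
import sys

# The setters.go:603 entries above end with: condition={"type":"Ready",...}.
# The braces are not nested, so a non-greedy match up to the first '}' suffices.
COND_RE = re.compile(
    r'"Node became not ready" node="(?P<node>[^"]+)" condition=(?P<cond>\{.*?\})'
)

def ready_conditions(journal_text: str):
    """Yield (node, condition dict) for each 'Node became not ready' entry."""
    for m in COND_RE.finditer(journal_text):
        yield m.group("node"), json.loads(m.group("cond"))

if __name__ == "__main__":
    latest = {}
    for node, cond in ready_conditions(sys.stdin.read()):
        # Keep only the newest heartbeat per (node, reason) pair.
        latest[(node, cond["reason"])] = cond["lastHeartbeatTime"]
    for (node, reason), heartbeat in sorted(latest.items()):
        print(f"{node}: Ready=False reason={reason} last heartbeat {heartbeat}")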
Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.124506 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.124534 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.124542 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.124552 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.124558 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:52Z","lastTransitionTime":"2025-11-25T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.226390 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.226545 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.226620 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.226694 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.226762 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:52Z","lastTransitionTime":"2025-11-25T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.329004 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.329038 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.329050 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.329063 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.329075 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:52Z","lastTransitionTime":"2025-11-25T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.431052 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.431087 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.431097 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.431110 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.431119 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:52Z","lastTransitionTime":"2025-11-25T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.532921 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.532941 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.532949 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.532961 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.532971 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:52Z","lastTransitionTime":"2025-11-25T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.635201 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.635231 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.635239 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.635250 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.635259 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:52Z","lastTransitionTime":"2025-11-25T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.738559 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.738607 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.738617 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.738632 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.738644 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:52Z","lastTransitionTime":"2025-11-25T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.830526 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:52 crc kubenswrapper[4482]: E1125 06:47:52.830634 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.841068 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.841091 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.841099 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.841109 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.841116 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:52Z","lastTransitionTime":"2025-11-25T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.942845 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.942872 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.942881 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.942891 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:52 crc kubenswrapper[4482]: I1125 06:47:52.942899 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:52Z","lastTransitionTime":"2025-11-25T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.044522 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.044555 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.044563 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.044575 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.044585 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:53Z","lastTransitionTime":"2025-11-25T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.146279 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.146448 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.146513 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.146576 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.146632 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:53Z","lastTransitionTime":"2025-11-25T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.248398 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.248422 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.248431 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.248441 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.248449 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:53Z","lastTransitionTime":"2025-11-25T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.349898 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.350061 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.350211 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.350313 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.350365 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:53Z","lastTransitionTime":"2025-11-25T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.453007 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.453796 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.453815 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.453829 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.453839 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:53Z","lastTransitionTime":"2025-11-25T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.555918 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.555964 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.555974 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.555991 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.556003 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:53Z","lastTransitionTime":"2025-11-25T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.657639 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.657664 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.657673 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.657709 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.657719 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:53Z","lastTransitionTime":"2025-11-25T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.759812 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.759835 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.759842 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.759852 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.759859 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:53Z","lastTransitionTime":"2025-11-25T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.829891 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.829905 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.829896 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:53 crc kubenswrapper[4482]: E1125 06:47:53.829994 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:53 crc kubenswrapper[4482]: E1125 06:47:53.830123 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:53 crc kubenswrapper[4482]: E1125 06:47:53.830219 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.861573 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.861614 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.861623 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.861632 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.861640 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:53Z","lastTransitionTime":"2025-11-25T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.963240 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.963347 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.963374 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.963384 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:53 crc kubenswrapper[4482]: I1125 06:47:53.963407 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:53Z","lastTransitionTime":"2025-11-25T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.064380 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.064500 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.064596 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.064661 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.064715 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:54Z","lastTransitionTime":"2025-11-25T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.166541 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.166574 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.166583 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.166596 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.166605 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:54Z","lastTransitionTime":"2025-11-25T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.269520 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.269585 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.269598 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.269619 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.269629 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:54Z","lastTransitionTime":"2025-11-25T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.371772 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.371802 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.371810 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.371823 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.371832 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:54Z","lastTransitionTime":"2025-11-25T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.473880 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.473911 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.473923 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.473935 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.473944 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:54Z","lastTransitionTime":"2025-11-25T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.575684 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.575906 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.575976 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.576040 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.576092 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:54Z","lastTransitionTime":"2025-11-25T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.678226 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.678272 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.678282 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.678298 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.678308 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:54Z","lastTransitionTime":"2025-11-25T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.720544 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:54 crc kubenswrapper[4482]: E1125 06:47:54.720678 4482 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 06:47:54 crc kubenswrapper[4482]: E1125 06:47:54.720765 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs podName:0a1c9846-2a7e-402e-985f-51a244241bd7 nodeName:}" failed. No retries permitted until 2025-11-25 06:48:10.720742398 +0000 UTC m=+65.208973668 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs") pod "network-metrics-daemon-2xhh4" (UID: "0a1c9846-2a7e-402e-985f-51a244241bd7") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.780851 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.780888 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.780897 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.780934 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.780947 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:54Z","lastTransitionTime":"2025-11-25T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.830627 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4"
Nov 25 06:47:54 crc kubenswrapper[4482]: E1125 06:47:54.830889 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7"
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.883484 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.883620 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.883694 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.883804 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.883869 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:54Z","lastTransitionTime":"2025-11-25T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
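
The MountVolume.SetUp failure above reports object "openshift-multus"/"metrics-daemon-secret" not registered: the kubelet's local object store has no entry for that secret yet, which is normal early in startup before its informers sync, and does not by itself mean the secret is absent from the API server. Note the backoff in the nestedpendingoperations entry: no retry for 16s here, and 32s for the later operations in this log. A minimal sketch (hypothetical, not from the log) to distinguish "not synced yet" from "genuinely missing":

    # Hypothetical check: does the secret exist on the API server at all?
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()
    try:
        v1.read_namespaced_secret(name="metrics-daemon-secret", namespace="openshift-multus")
        print("secret exists; the kubelet most likely had not synced it yet")
    except ApiException as exc:
        if exc.status == 404:
            print("secret is actually missing")
        else:
            raise
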
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.986158 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.986201 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.986210 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.986222 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:47:54 crc kubenswrapper[4482]: I1125 06:47:54.986231 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:54Z","lastTransitionTime":"2025-11-25T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.088192 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.088234 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.088244 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.088255 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.088263 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:55Z","lastTransitionTime":"2025-11-25T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.190286 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.190321 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.190332 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.190348 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.190359 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:55Z","lastTransitionTime":"2025-11-25T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.292152 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.292196 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.292205 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.292215 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.292222 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:55Z","lastTransitionTime":"2025-11-25T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.393241 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.393269 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.393278 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.393363 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.393373 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:55Z","lastTransitionTime":"2025-11-25T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.494852 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.494872 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.494881 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.494891 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.494898 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:55Z","lastTransitionTime":"2025-11-25T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.597481 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.597562 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.597577 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.597600 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.597612 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:55Z","lastTransitionTime":"2025-11-25T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.632556 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.632657 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:48:27.63262306 +0000 UTC m=+82.120854329 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
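
The TearDown failure above is a registration problem, not a data problem: the kubelet cannot find kubevirt.io.hostpath-provisioner among the CSI drivers currently registered on this node, so it cannot even build a client to unmount the volume, and it backs off for 32s. As a sketch (hypothetical, not from the log; assumes the read_csi_node method of StorageV1Api in recent Kubernetes Python clients), the drivers the node currently advertises can be listed from its CSINode object:

    # Hypothetical check: which CSI drivers are registered on node "crc"?
    from kubernetes import client, config

    config.load_kube_config()
    storage = client.StorageV1Api()
    csinode = storage.read_csi_node(name="crc")
    drivers = [d.name for d in (csinode.spec.drivers or [])]
    print(drivers)  # expect kubevirt.io.hostpath-provisioner once its plugin has re-registered
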
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.699985 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.700011 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.700021 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.700032 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.700039 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:55Z","lastTransitionTime":"2025-11-25T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.733493 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.733542 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.733565 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.733612 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.733709 4482 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.733750 4482 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.733767 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:48:27.73375561 +0000 UTC m=+82.221986870 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.733796 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:48:27.733782591 +0000 UTC m=+82.222013860 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.733793 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.733831 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.733845 4482 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.733857 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.733874 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 06:48:27.73386723 +0000 UTC m=+82.222098499 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.733876 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.733897 4482 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.733956 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 06:48:27.733938806 +0000 UTC m=+82.222170065 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.802451 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.802485 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.802498 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.802509 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.802518 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:55Z","lastTransitionTime":"2025-11-25T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.830226 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.830243 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.830335 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.830375 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.830571 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 06:47:55 crc kubenswrapper[4482]: E1125 06:47:55.830611 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.839756 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z"
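
The status patch above fails for a different reason than the CNI errors: the node-identity webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, three months before the log's current time, so every kubelet status update that has to pass through it is rejected. A minimal sketch (hypothetical, not from the log; uses the third-party cryptography package) to confirm the expiry straight from the listening endpoint:

    # Hypothetical check: fetch the webhook's serving cert and print its notAfter.
    import socket
    import ssl
    from cryptography import x509  # pip install cryptography

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # inspection only; verification is exactly what fails here
    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    print(cert.not_valid_after)  # per the log, this should be 2025-08-24 17:21:41

Every subsequent status patch in this section fails the same way, so even healthy pods (the multus and ovnkube-node containers below report ready=true) cannot get their statuses recorded.
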
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.848315 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z"
Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.855789 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.863793 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.878304 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a083
13edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:45Z\\\",\\\"message\\\":\\\"shift-dns/node-resolver-xk9c4\\\\nI1125 06:47:45.602488 6043 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1125 06:47:45.602506 6043 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-xk9c4 in node crc\\\\nI1125 06:47:45.602442 6043 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1125 06:47:45.602514 6043 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-xk9c4 after 0 failed attempt(s)\\\\nI1125 06:47:45.602725 6043 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-xk9c4\\\\nF1125 06:47:45.602506 6043 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.886985 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 
06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.896506 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.904106 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.904131 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.904140 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.904153 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.904161 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:55Z","lastTransitionTime":"2025-11-25T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.906916 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.913421 4482 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.921275 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.928485 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.935782 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.942681 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.950818 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.957725 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:55 crc kubenswrapper[4482]: I1125 06:47:55.965126 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:55Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.005602 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.005634 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.005651 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.005665 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.005720 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:56Z","lastTransitionTime":"2025-11-25T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.109247 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.109299 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.109314 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.109339 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.109356 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:56Z","lastTransitionTime":"2025-11-25T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.211970 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.212019 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.212031 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.212050 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.212067 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:56Z","lastTransitionTime":"2025-11-25T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.314800 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.314841 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.314853 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.314868 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.314879 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:56Z","lastTransitionTime":"2025-11-25T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.416582 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.416629 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.416643 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.416664 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.416679 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:56Z","lastTransitionTime":"2025-11-25T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.518378 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.518425 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.518439 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.518456 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.518468 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:56Z","lastTransitionTime":"2025-11-25T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.620286 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.620340 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.620351 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.620367 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.620381 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:56Z","lastTransitionTime":"2025-11-25T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.723017 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.723216 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.723227 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.723245 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.723273 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:56Z","lastTransitionTime":"2025-11-25T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.824879 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.824906 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.824915 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.824927 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.824935 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:56Z","lastTransitionTime":"2025-11-25T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.830362 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:56 crc kubenswrapper[4482]: E1125 06:47:56.830538 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.926578 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.926613 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.926622 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.926634 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:56 crc kubenswrapper[4482]: I1125 06:47:56.926643 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:56Z","lastTransitionTime":"2025-11-25T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.029256 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.029285 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.029295 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.029306 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.029314 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.131529 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.131578 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.131587 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.131602 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.131610 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.233818 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.233864 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.233872 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.233885 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.233894 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.268013 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.268047 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.268057 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.268069 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.268080 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: E1125 06:47:57.277665 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:57Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.279843 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.279865 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.279873 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.279883 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.279891 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: E1125 06:47:57.288530 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:57Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.290813 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.290879 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.290891 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.290900 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.290907 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: E1125 06:47:57.300414 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:57Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.302482 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.302543 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.302559 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.302584 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.302599 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: E1125 06:47:57.310442 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:57Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.312492 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.312520 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.312531 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.312540 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.312548 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: E1125 06:47:57.320095 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:57Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:57 crc kubenswrapper[4482]: E1125 06:47:57.320214 4482 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.335067 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.335088 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.335096 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.335105 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.335113 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.436523 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.436542 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.436550 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.436558 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.436565 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.538322 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.538756 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.538826 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.538887 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.538946 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.641043 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.641226 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.641284 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.641350 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.641406 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.743217 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.743287 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.743298 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.743309 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.743316 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.830522 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.830522 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.830533 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:57 crc kubenswrapper[4482]: E1125 06:47:57.830728 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:57 crc kubenswrapper[4482]: E1125 06:47:57.830841 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:57 crc kubenswrapper[4482]: E1125 06:47:57.830936 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.845382 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.845426 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.845443 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.845463 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.845477 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.946800 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.946840 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.946853 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.946869 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:57 crc kubenswrapper[4482]: I1125 06:47:57.946879 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:57Z","lastTransitionTime":"2025-11-25T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.048754 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.048796 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.048806 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.048821 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.048832 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:58Z","lastTransitionTime":"2025-11-25T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.150541 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.150584 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.150595 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.150611 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.150624 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:58Z","lastTransitionTime":"2025-11-25T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.252746 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.252790 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.252800 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.252826 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.252836 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:58Z","lastTransitionTime":"2025-11-25T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.342654 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.354324 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.354800 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.354902 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.354960 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.355058 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.355149 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:58Z","lastTransitionTime":"2025-11-25T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.358160 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08
ec0f5e6b05f50debf389018f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:45Z\\\",\\\"message\\\":\\\"shift-dns/node-resolver-xk9c4\\\\nI1125 06:47:45.602488 6043 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1125 06:47:45.602506 6043 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-xk9c4 in node crc\\\\nI1125 06:47:45.602442 6043 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1125 06:47:45.602514 6043 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-xk9c4 after 0 failed attempt(s)\\\\nI1125 06:47:45.602725 6043 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-xk9c4\\\\nF1125 06:47:45.602506 6043 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.365320 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.372874 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.380814 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.388438 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.396647 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.405632 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.415634 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.422947 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.433534 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.441931 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.449307 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.456061 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.457779 4482 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.457836 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.457845 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.457862 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.457893 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:58Z","lastTransitionTime":"2025-11-25T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.465243 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 
06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.472035 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.479720 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:47:58Z is after 2025-08-24T17:21:41Z" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.559789 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.559830 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.559840 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.559853 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.559860 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:58Z","lastTransitionTime":"2025-11-25T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.661528 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.661555 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.661563 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.661576 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.661584 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:58Z","lastTransitionTime":"2025-11-25T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.763977 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.764004 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.764011 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.764024 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.764055 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:58Z","lastTransitionTime":"2025-11-25T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.830117 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:47:58 crc kubenswrapper[4482]: E1125 06:47:58.830287 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.865954 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.866013 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.866025 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.866048 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.866061 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:58Z","lastTransitionTime":"2025-11-25T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.967438 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.967465 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.967473 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.967503 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:58 crc kubenswrapper[4482]: I1125 06:47:58.967514 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:58Z","lastTransitionTime":"2025-11-25T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.069893 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.069923 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.069932 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.069943 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.069950 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:59Z","lastTransitionTime":"2025-11-25T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.171290 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.171427 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.171501 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.171559 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.171614 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:59Z","lastTransitionTime":"2025-11-25T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.273435 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.273477 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.273486 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.273503 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.273512 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:59Z","lastTransitionTime":"2025-11-25T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.375842 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.375886 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.375898 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.375920 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.375934 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:59Z","lastTransitionTime":"2025-11-25T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.478262 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.478296 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.478304 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.478317 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.478325 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:59Z","lastTransitionTime":"2025-11-25T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.579782 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.579814 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.579822 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.579835 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.579844 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:59Z","lastTransitionTime":"2025-11-25T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.681110 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.681132 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.681140 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.681150 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.681158 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:59Z","lastTransitionTime":"2025-11-25T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.782701 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.782731 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.782751 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.782763 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.782770 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:59Z","lastTransitionTime":"2025-11-25T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.830563 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.830701 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:47:59 crc kubenswrapper[4482]: E1125 06:47:59.830722 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.830837 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:47:59 crc kubenswrapper[4482]: E1125 06:47:59.830901 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:47:59 crc kubenswrapper[4482]: E1125 06:47:59.831272 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.831412 4482 scope.go:117] "RemoveContainer" containerID="9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f" Nov 25 06:47:59 crc kubenswrapper[4482]: E1125 06:47:59.831530 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.884756 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.884789 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.884798 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.884809 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.884819 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:59Z","lastTransitionTime":"2025-11-25T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.986474 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.986513 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.986542 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.986559 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:47:59 crc kubenswrapper[4482]: I1125 06:47:59.986569 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:47:59Z","lastTransitionTime":"2025-11-25T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.088389 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.088450 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.088463 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.088482 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.088495 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:00Z","lastTransitionTime":"2025-11-25T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.190429 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.190477 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.190489 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.190508 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.190520 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:00Z","lastTransitionTime":"2025-11-25T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.292106 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.292166 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.292191 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.292218 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.292230 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:00Z","lastTransitionTime":"2025-11-25T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.393948 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.393990 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.393999 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.394017 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.394032 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:00Z","lastTransitionTime":"2025-11-25T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.496410 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.496480 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.496490 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.496510 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.496522 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:00Z","lastTransitionTime":"2025-11-25T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.599272 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.599338 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.599348 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.599369 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.599380 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:00Z","lastTransitionTime":"2025-11-25T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.702273 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.702317 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.702328 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.702346 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.702362 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:00Z","lastTransitionTime":"2025-11-25T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.803863 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.803979 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.804041 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.804108 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.804165 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:00Z","lastTransitionTime":"2025-11-25T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.830423 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:00 crc kubenswrapper[4482]: E1125 06:48:00.830568 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.905714 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.905767 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.905781 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.905795 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:00 crc kubenswrapper[4482]: I1125 06:48:00.905806 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:00Z","lastTransitionTime":"2025-11-25T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.007300 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.007337 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.007349 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.007365 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.007376 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:01Z","lastTransitionTime":"2025-11-25T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.109346 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.109392 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.109402 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.109416 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.109427 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:01Z","lastTransitionTime":"2025-11-25T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.211014 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.211066 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.211077 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.211092 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.211105 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:01Z","lastTransitionTime":"2025-11-25T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.312994 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.313024 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.313032 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.313042 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.313050 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:01Z","lastTransitionTime":"2025-11-25T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.414624 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.414648 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.414655 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.414665 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.414687 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:01Z","lastTransitionTime":"2025-11-25T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.516013 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.516040 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.516048 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.516056 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.516063 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:01Z","lastTransitionTime":"2025-11-25T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.618001 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.618047 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.618057 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.618069 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.618077 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:01Z","lastTransitionTime":"2025-11-25T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.720370 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.720402 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.720412 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.720426 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.720437 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:01Z","lastTransitionTime":"2025-11-25T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.822526 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.822561 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.822570 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.822585 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.822593 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:01Z","lastTransitionTime":"2025-11-25T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.830094 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.830093 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:01 crc kubenswrapper[4482]: E1125 06:48:01.830193 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.830234 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:01 crc kubenswrapper[4482]: E1125 06:48:01.830296 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:01 crc kubenswrapper[4482]: E1125 06:48:01.830483 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.924709 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.924749 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.924758 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.924772 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:01 crc kubenswrapper[4482]: I1125 06:48:01.924784 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:01Z","lastTransitionTime":"2025-11-25T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.026694 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.026715 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.026723 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.026736 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.026757 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:02Z","lastTransitionTime":"2025-11-25T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.129078 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.129228 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.129289 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.129348 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.129405 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:02Z","lastTransitionTime":"2025-11-25T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.230795 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.230833 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.230845 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.230860 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.230871 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:02Z","lastTransitionTime":"2025-11-25T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.332452 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.332476 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.332486 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.332501 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.332516 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:02Z","lastTransitionTime":"2025-11-25T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.434607 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.434644 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.434656 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.434670 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.434680 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:02Z","lastTransitionTime":"2025-11-25T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.536899 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.536936 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.536946 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.536958 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.536967 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:02Z","lastTransitionTime":"2025-11-25T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.639073 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.639113 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.639123 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.639135 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.639146 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:02Z","lastTransitionTime":"2025-11-25T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.740759 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.740785 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.740794 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.740804 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.740813 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:02Z","lastTransitionTime":"2025-11-25T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.830775 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:02 crc kubenswrapper[4482]: E1125 06:48:02.830906 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.842731 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.842768 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.842778 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.842792 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.842805 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:02Z","lastTransitionTime":"2025-11-25T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.944609 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.944642 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.944672 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.944684 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:02 crc kubenswrapper[4482]: I1125 06:48:02.944692 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:02Z","lastTransitionTime":"2025-11-25T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.046126 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.046184 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.046196 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.046209 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.046219 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:03Z","lastTransitionTime":"2025-11-25T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.147994 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.148059 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.148072 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.148086 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.148096 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:03Z","lastTransitionTime":"2025-11-25T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.250404 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.250435 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.250443 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.250455 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.250462 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:03Z","lastTransitionTime":"2025-11-25T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.352100 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.352130 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.352139 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.352151 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.352158 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:03Z","lastTransitionTime":"2025-11-25T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.455004 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.455044 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.455054 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.455068 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.455080 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:03Z","lastTransitionTime":"2025-11-25T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.557113 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.557158 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.557184 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.557203 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.557214 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:03Z","lastTransitionTime":"2025-11-25T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.659537 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.659644 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.659656 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.659676 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.659687 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:03Z","lastTransitionTime":"2025-11-25T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.761896 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.761941 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.761953 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.761971 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.761985 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:03Z","lastTransitionTime":"2025-11-25T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.830083 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.830141 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.830151 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:03 crc kubenswrapper[4482]: E1125 06:48:03.830223 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:03 crc kubenswrapper[4482]: E1125 06:48:03.830326 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:03 crc kubenswrapper[4482]: E1125 06:48:03.830490 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.863851 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.863888 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.863901 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.863913 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.863923 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:03Z","lastTransitionTime":"2025-11-25T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.966275 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.966303 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.966313 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.966324 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:03 crc kubenswrapper[4482]: I1125 06:48:03.966332 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:03Z","lastTransitionTime":"2025-11-25T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.069078 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.069110 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.069120 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.069134 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.069143 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:04Z","lastTransitionTime":"2025-11-25T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.170780 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.170803 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.170812 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.170822 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.170829 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:04Z","lastTransitionTime":"2025-11-25T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.273365 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.273400 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.273409 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.273419 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.273426 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:04Z","lastTransitionTime":"2025-11-25T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.375874 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.375905 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.375914 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.375924 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.375932 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:04Z","lastTransitionTime":"2025-11-25T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.477365 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.477418 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.477428 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.477439 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.477446 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:04Z","lastTransitionTime":"2025-11-25T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.579562 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.579617 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.579630 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.579652 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.579669 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:04Z","lastTransitionTime":"2025-11-25T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.682089 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.682122 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.682133 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.682147 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.682157 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:04Z","lastTransitionTime":"2025-11-25T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.784382 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.784407 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.784416 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.784428 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.784435 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:04Z","lastTransitionTime":"2025-11-25T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.830769 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:04 crc kubenswrapper[4482]: E1125 06:48:04.830900 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.887296 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.887330 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.887339 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.887354 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.887367 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:04Z","lastTransitionTime":"2025-11-25T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.989516 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.989610 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.989719 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.989792 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:04 crc kubenswrapper[4482]: I1125 06:48:04.989855 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:04Z","lastTransitionTime":"2025-11-25T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.091409 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.091443 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.091452 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.091467 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.091475 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:05Z","lastTransitionTime":"2025-11-25T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.193263 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.193304 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.193314 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.193331 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.193341 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:05Z","lastTransitionTime":"2025-11-25T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.295789 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.295833 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.295845 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.295863 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.295876 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:05Z","lastTransitionTime":"2025-11-25T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.398449 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.398716 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.398807 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.398873 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.398945 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:05Z","lastTransitionTime":"2025-11-25T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.501300 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.501327 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.501335 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.501347 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.501356 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:05Z","lastTransitionTime":"2025-11-25T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.603403 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.603627 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.603694 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.603766 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.603824 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:05Z","lastTransitionTime":"2025-11-25T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.705227 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.705256 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.705265 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.705276 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.705285 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:05Z","lastTransitionTime":"2025-11-25T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.806666 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.806916 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.806994 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.807067 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.807125 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:05Z","lastTransitionTime":"2025-11-25T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.830157 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:05 crc kubenswrapper[4482]: E1125 06:48:05.830303 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.830194 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.830359 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:05 crc kubenswrapper[4482]: E1125 06:48:05.830445 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:05 crc kubenswrapper[4482]: E1125 06:48:05.830590 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.842608 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.852200 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.860212 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d314f82-e6a3-44d6-b59b-b68552730866\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d5f730d9fc2cf67bca05c6b7ca8035f813d91a8ac6b069f70457b5a63e9d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://645d4b2d1e65d0d5b0e29914ac6e7ac26a91d65ad5ea42a309e983cf633e9fb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c736aa6a7231244785b8651eda784a6aa13f745d1e95a7d4963458ebe6647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.869966 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.885106 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699
a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.895149 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.905095 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.908926 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.908948 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.908956 4482 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.908969 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.908979 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:05Z","lastTransitionTime":"2025-11-25T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.914776 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.923144 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.932442 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.939518 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.947597 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.959782 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a083
13edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:45Z\\\",\\\"message\\\":\\\"shift-dns/node-resolver-xk9c4\\\\nI1125 06:47:45.602488 6043 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1125 06:47:45.602506 6043 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-xk9c4 in node crc\\\\nI1125 06:47:45.602442 6043 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1125 06:47:45.602514 6043 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-xk9c4 after 0 failed attempt(s)\\\\nI1125 06:47:45.602725 6043 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-xk9c4\\\\nF1125 06:47:45.602506 6043 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.967330 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 
06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.975766 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.984471 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:05 crc kubenswrapper[4482]: I1125 06:48:05.992431 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:05Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.011061 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.011087 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.011095 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.011109 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.011117 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:06Z","lastTransitionTime":"2025-11-25T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.112630 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.112661 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.112670 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.112682 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.112690 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:06Z","lastTransitionTime":"2025-11-25T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.214626 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.214650 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.214659 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.214669 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.214677 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:06Z","lastTransitionTime":"2025-11-25T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.316100 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.316130 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.316139 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.316151 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.316158 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:06Z","lastTransitionTime":"2025-11-25T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.418385 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.418412 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.418421 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.418430 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.418437 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:06Z","lastTransitionTime":"2025-11-25T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.520430 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.520469 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.520480 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.520491 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.520498 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:06Z","lastTransitionTime":"2025-11-25T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.621697 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.621728 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.621737 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.621760 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.621768 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:06Z","lastTransitionTime":"2025-11-25T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.724127 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.724183 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.724193 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.724207 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.724215 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:06Z","lastTransitionTime":"2025-11-25T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.826685 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.826746 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.826765 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.826783 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.826792 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:06Z","lastTransitionTime":"2025-11-25T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.830209 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:06 crc kubenswrapper[4482]: E1125 06:48:06.830321 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.928899 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.928928 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.928936 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.928947 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:06 crc kubenswrapper[4482]: I1125 06:48:06.928974 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:06Z","lastTransitionTime":"2025-11-25T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.031140 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.031183 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.031192 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.031202 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.031209 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.132602 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.132623 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.132650 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.132661 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.132668 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.234124 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.234180 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.234188 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.234196 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.234203 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.335865 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.335898 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.335908 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.335923 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.335932 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.437715 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.437742 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.437758 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.437771 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.437779 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.539919 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.539953 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.539962 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.539973 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.539981 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.548018 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.548040 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.548048 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.548059 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.548066 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: E1125 06:48:07.557223 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:07Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.559797 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.559825 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.559834 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.559846 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.559854 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: E1125 06:48:07.568434 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:07Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.570642 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.570668 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.570676 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.570690 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.570698 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: E1125 06:48:07.578375 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:07Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.580908 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.580933 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.580941 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.580950 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.580958 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: E1125 06:48:07.588761 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[<image list omitted: byte-for-byte identical to the image list in the first failed status-patch attempt above>],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:07Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.590650 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.590670 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.590678 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.590686 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.590693 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: E1125 06:48:07.598540 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[<image list omitted: byte-for-byte identical to the image list in the first failed status-patch attempt above>],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:07Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:07 crc kubenswrapper[4482]: E1125 06:48:07.598675 4482 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.641366 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.641400 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.641409 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.641435 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.641444 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.743298 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.743323 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.743331 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.743341 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.743348 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.830545 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:07 crc kubenswrapper[4482]: E1125 06:48:07.830658 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.830713 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:07 crc kubenswrapper[4482]: E1125 06:48:07.830766 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.830938 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:07 crc kubenswrapper[4482]: E1125 06:48:07.830985 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.844905 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.844947 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.844954 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.844964 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.844974 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.946059 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.946086 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.946093 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.946102 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:07 crc kubenswrapper[4482]: I1125 06:48:07.946125 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:07Z","lastTransitionTime":"2025-11-25T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.047340 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.047371 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.047381 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.047391 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.047430 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:08Z","lastTransitionTime":"2025-11-25T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.151305 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.151344 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.151353 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.151366 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.151377 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:08Z","lastTransitionTime":"2025-11-25T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.253491 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.253516 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.253524 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.253534 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.253541 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:08Z","lastTransitionTime":"2025-11-25T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.355446 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.355465 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.355473 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.355483 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.355490 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:08Z","lastTransitionTime":"2025-11-25T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.457350 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.457370 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.457378 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.457387 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.457394 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:08Z","lastTransitionTime":"2025-11-25T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.559146 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.559192 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.559201 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.559210 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.559217 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:08Z","lastTransitionTime":"2025-11-25T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.660775 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.660797 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.660805 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.660815 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.660822 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:08Z","lastTransitionTime":"2025-11-25T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.762760 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.762782 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.762789 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.762799 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.762807 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:08Z","lastTransitionTime":"2025-11-25T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.829750 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:08 crc kubenswrapper[4482]: E1125 06:48:08.829882 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.864326 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.864353 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.864362 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.864374 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.864383 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:08Z","lastTransitionTime":"2025-11-25T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.966704 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.966723 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.966731 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.966743 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:08 crc kubenswrapper[4482]: I1125 06:48:08.966751 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:08Z","lastTransitionTime":"2025-11-25T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.068648 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.068672 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.068697 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.068710 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.068718 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:09Z","lastTransitionTime":"2025-11-25T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.171044 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.171071 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.171079 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.171109 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.171118 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:09Z","lastTransitionTime":"2025-11-25T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.273019 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.273058 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.273069 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.273083 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.273093 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:09Z","lastTransitionTime":"2025-11-25T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.374560 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.374599 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.374607 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.374621 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.374631 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:09Z","lastTransitionTime":"2025-11-25T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.476443 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.476475 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.476484 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.476496 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.476506 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:09Z","lastTransitionTime":"2025-11-25T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.578039 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.578063 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.578074 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.578087 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.578095 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:09Z","lastTransitionTime":"2025-11-25T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.679879 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.679914 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.679924 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.679935 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.679945 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:09Z","lastTransitionTime":"2025-11-25T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.781656 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.781684 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.781692 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.781704 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.781710 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:09Z","lastTransitionTime":"2025-11-25T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.830643 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.830667 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.830662 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:09 crc kubenswrapper[4482]: E1125 06:48:09.830772 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
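[annotation] The same five-event block above (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, plus the setters.go "Node became not ready" condition) repeats at roughly 100 ms intervals, always with the same KubeletNotReady reason, and the three pods without sandboxes (network-check-source-55646444c4-trplf, network-check-target-xd92c, networking-console-plugin-85b44fc459-gdk6g) are skipped for the same NetworkReady=false cause. A quick way to confirm that nothing else changes between iterations is to tally the repeated condition entries; a minimal Python sketch, assuming the journal has been saved to a file named kubelet.log (the filename is an assumption) and skipping entries that wrap across physical lines:

import collections
import json
import re

# Tally the repeating "Node became not ready" conditions in a saved journal
# dump. The condition is logged as a one-level JSON object, so a no-nested-
# braces match is enough to capture it.
COND = re.compile(r'"Node became not ready" node="([^"]+)" condition=(\{[^}]*\})')

counts = collections.Counter()
with open("kubelet.log", encoding="utf-8") as fh:
    for line in fh:
        for node, raw in COND.findall(line):
            cond = json.loads(raw)
            counts[(node, cond["reason"], cond["status"])] += 1

for (node, reason, status), n in counts.most_common():
    print(f"{node}: Ready={status} reason={reason} x{n}")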
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:09 crc kubenswrapper[4482]: E1125 06:48:09.830827 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:09 crc kubenswrapper[4482]: E1125 06:48:09.830899 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.883488 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.883507 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.883515 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.883525 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.883533 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:09Z","lastTransitionTime":"2025-11-25T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.984711 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.984739 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.984748 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.984769 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:09 crc kubenswrapper[4482]: I1125 06:48:09.984777 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:09Z","lastTransitionTime":"2025-11-25T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.086377 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.086403 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.086412 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.086422 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.086428 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:10Z","lastTransitionTime":"2025-11-25T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.188449 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.188474 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.188482 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.188491 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.188498 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:10Z","lastTransitionTime":"2025-11-25T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.290685 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.290736 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.290751 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.290787 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.290801 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:10Z","lastTransitionTime":"2025-11-25T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.392375 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.392404 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.392412 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.392423 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.392433 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:10Z","lastTransitionTime":"2025-11-25T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.494012 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.494034 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.494043 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.494053 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.494061 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:10Z","lastTransitionTime":"2025-11-25T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.595883 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.595911 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.595921 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.595932 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.595938 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:10Z","lastTransitionTime":"2025-11-25T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.697443 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.697475 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.697484 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.697496 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.697504 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:10Z","lastTransitionTime":"2025-11-25T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.758522 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:10 crc kubenswrapper[4482]: E1125 06:48:10.758607 4482 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 06:48:10 crc kubenswrapper[4482]: E1125 06:48:10.758642 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs podName:0a1c9846-2a7e-402e-985f-51a244241bd7 nodeName:}" failed. No retries permitted until 2025-11-25 06:48:42.758630816 +0000 UTC m=+97.246862076 (durationBeforeRetry 32s). 
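[annotation] The mount failure above is the kubelet's operation executor giving up on the metrics-certs secret volume because the object "openshift-multus"/"metrics-daemon-secret" is not yet registered in its cache, then scheduling a retry 32 s out (the m=+97.24... suffix is seconds since kubelet start on the monotonic clock). The 32 s figure is consistent with a doubling backoff; a sketch, assuming the upstream defaults of a 500 ms initial delay doubling up to a 2m2s cap (constants assumed, not present in this log):

from datetime import timedelta

# Sketch of the doubling delay behind "durationBeforeRetry 32s". The 500 ms
# initial delay and the 2m2s cap are assumed upstream kubelet defaults, not
# values taken from this log.
def duration_before_retry(failures: int) -> timedelta:
    delay = timedelta(milliseconds=500) * (2 ** max(failures - 1, 0))
    return min(delay, timedelta(minutes=2, seconds=2))

for n in range(1, 10):
    print(n, duration_before_retry(n))  # failure 7 -> 0:00:32, as logged above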
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs") pod "network-metrics-daemon-2xhh4" (UID: "0a1c9846-2a7e-402e-985f-51a244241bd7") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.799261 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.799290 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.799302 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.799312 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.799319 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:10Z","lastTransitionTime":"2025-11-25T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.830444 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:10 crc kubenswrapper[4482]: E1125 06:48:10.830611 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.901132 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.901195 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.901205 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.901218 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:10 crc kubenswrapper[4482]: I1125 06:48:10.901225 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:10Z","lastTransitionTime":"2025-11-25T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.003118 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.003148 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.003157 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.003198 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.003207 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:11Z","lastTransitionTime":"2025-11-25T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.105482 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.105506 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.105515 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.105525 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.105532 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:11Z","lastTransitionTime":"2025-11-25T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.207637 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.207660 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.207669 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.207697 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.207705 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:11Z","lastTransitionTime":"2025-11-25T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.310005 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.310031 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.310041 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.310067 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.310075 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:11Z","lastTransitionTime":"2025-11-25T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.411816 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.411847 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.411856 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.411868 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.411879 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:11Z","lastTransitionTime":"2025-11-25T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.513575 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.513598 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.513607 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.513617 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.513624 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:11Z","lastTransitionTime":"2025-11-25T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.615408 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.615432 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.615442 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.615450 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.615458 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:11Z","lastTransitionTime":"2025-11-25T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.717145 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.717199 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.717211 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.717225 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.717234 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:11Z","lastTransitionTime":"2025-11-25T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.819251 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.819278 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.819303 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.819316 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.819323 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:11Z","lastTransitionTime":"2025-11-25T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.830562 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.830605 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.830574 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:11 crc kubenswrapper[4482]: E1125 06:48:11.830680 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:11 crc kubenswrapper[4482]: E1125 06:48:11.830746 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:11 crc kubenswrapper[4482]: E1125 06:48:11.830848 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.920789 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.920918 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.920994 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.921065 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:11 crc kubenswrapper[4482]: I1125 06:48:11.921125 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:11Z","lastTransitionTime":"2025-11-25T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.023127 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.023157 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.023194 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.023206 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.023215 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:12Z","lastTransitionTime":"2025-11-25T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.108581 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b5qtx_2384eec7-0cd1-4bc5-9bc7-b5bb42607c37/kube-multus/0.log" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.108619 4482 generic.go:334] "Generic (PLEG): container finished" podID="2384eec7-0cd1-4bc5-9bc7-b5bb42607c37" containerID="c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7" exitCode=1 Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.108644 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b5qtx" event={"ID":"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37","Type":"ContainerDied","Data":"c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7"} Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.108925 4482 scope.go:117] "RemoveContainer" containerID="c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.118185 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 
06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.125271 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.125295 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.125303 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.125314 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.125324 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:12Z","lastTransitionTime":"2025-11-25T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.128350 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
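[annotation] In the patch above, the old check-endpoints container reports lastState.terminated.exitCode 137 with reason ContainerStatusUnknown: the runtime could no longer locate the container after the restart, and 137 follows the 128+N convention for death by signal N (here 128 + 9, SIGKILL). A one-line decoder for that convention, shown against both this 137 and the kube-multus exit code 1 logged earlier:

import signal

# Exit codes above 128 encode termination by signal (code - 128).
def describe_exit(code: int) -> str:
    if code > 128:
        return f"killed by {signal.Signals(code - 128).name}"
    return f"exited with status {code}"

print(describe_exit(137))  # killed by SIGKILL (check-endpoints lastState)
print(describe_exit(1))    # exited with status 1 (kube-multus, above)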
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.138020 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.146622 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.156780 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:11Z\\\",\\\"message\\\":\\\"2025-11-25T06:47:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea\\\\n2025-11-25T06:47:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea to /host/opt/cni/bin/\\\\n2025-11-25T06:47:26Z [verbose] multus-daemon started\\\\n2025-11-25T06:47:26Z [verbose] Readiness Indicator file check\\\\n2025-11-25T06:48:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.169360 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:45Z\\\",\\\"message\\\":\\\"shift-dns/node-resolver-xk9c4\\\\nI1125 06:47:45.602488 6043 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1125 06:47:45.602506 6043 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-xk9c4 in node crc\\\\nI1125 06:47:45.602442 6043 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1125 06:47:45.602514 6043 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-xk9c4 after 0 failed attempt(s)\\\\nI1125 06:47:45.602725 6043 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-xk9c4\\\\nF1125 06:47:45.602506 6043 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.178925 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.188286 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d314f82-e6a3-44d6-b59b-b68552730866\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d5f730d9fc2cf67bca05c6b7ca8035f813d91a8ac6b069f70457b5a63e9d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://645d4b2d1e65d0d5b0e29914ac6e7ac26a91d65ad5ea42a309e983cf633e9fb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c736aa6a7231244785b8651eda784a6aa13f745d1e95a7d4963458ebe6647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.198520 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.206240 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.214019 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.221087 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.226623 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.226649 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.226660 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.226672 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.226680 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:12Z","lastTransitionTime":"2025-11-25T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.228182 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.234902 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.243749 4482 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483
a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.253278 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.261328 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:12Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.331210 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.331270 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.331302 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.331316 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.331325 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:12Z","lastTransitionTime":"2025-11-25T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.433818 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.433913 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.433976 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.434044 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.434108 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:12Z","lastTransitionTime":"2025-11-25T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.536273 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.536295 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.536303 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.536315 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.536323 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:12Z","lastTransitionTime":"2025-11-25T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.637591 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.637614 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.637623 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.637632 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.637639 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:12Z","lastTransitionTime":"2025-11-25T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.739569 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.739675 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.739687 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.739700 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.739711 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:12Z","lastTransitionTime":"2025-11-25T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.830194 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:12 crc kubenswrapper[4482]: E1125 06:48:12.830297 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.841317 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.841339 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.841362 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.841375 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.841383 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:12Z","lastTransitionTime":"2025-11-25T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.943787 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.943814 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.943823 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.943832 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:12 crc kubenswrapper[4482]: I1125 06:48:12.943839 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:12Z","lastTransitionTime":"2025-11-25T06:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.045443 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.045467 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.045475 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.045490 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.045499 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:13Z","lastTransitionTime":"2025-11-25T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.112589 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b5qtx_2384eec7-0cd1-4bc5-9bc7-b5bb42607c37/kube-multus/0.log" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.112637 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b5qtx" event={"ID":"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37","Type":"ContainerStarted","Data":"898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16"} Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.122413 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.131158 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.140494 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.146543 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.146565 4482 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.146572 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.146583 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.146591 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:13Z","lastTransitionTime":"2025-11-25T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.148509 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.156571 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:11Z\\\",\\\"message\\\":\\\"2025-11-25T06:47:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea\\\\n2025-11-25T06:47:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea to /host/opt/cni/bin/\\\\n2025-11-25T06:47:26Z [verbose] multus-daemon started\\\\n2025-11-25T06:47:26Z [verbose] Readiness Indicator file check\\\\n2025-11-25T06:48:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:48:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.180533 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:45Z\\\",\\\"message\\\":\\\"shift-dns/node-resolver-xk9c4\\\\nI1125 06:47:45.602488 6043 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1125 06:47:45.602506 6043 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-xk9c4 in node crc\\\\nI1125 06:47:45.602442 6043 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1125 06:47:45.602514 6043 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-xk9c4 after 0 failed attempt(s)\\\\nI1125 06:47:45.602725 6043 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-xk9c4\\\\nF1125 06:47:45.602506 6043 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.198573 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.218018 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d314f82-e6a3-44d6-b59b-b68552730866\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d5f730d9fc2cf67bca05c6b7ca8035f813d91a8ac6b069f70457b5a63e9d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://645d4b2d1e65d0d5b0e29914ac6e7ac26a91d65ad5ea42a309e983cf633e9fb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c736aa6a7231244785b8651eda784a6aa13f745d1e95a7d4963458ebe6647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.228978 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.235547 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.244013 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 
2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.248104 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.248130 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.248139 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.248200 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.248215 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:13Z","lastTransitionTime":"2025-11-25T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.251351 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 
06:48:13.259280 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.265961 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.274901 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.281056 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.288435 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:13Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.349902 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.349932 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.349942 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.349955 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.349964 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:13Z","lastTransitionTime":"2025-11-25T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.451429 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.451456 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.451464 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.451475 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.451484 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:13Z","lastTransitionTime":"2025-11-25T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.552971 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.553012 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.553021 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.553035 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.553043 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:13Z","lastTransitionTime":"2025-11-25T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.654606 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.654654 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.654665 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.654677 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.654685 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:13Z","lastTransitionTime":"2025-11-25T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.755811 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.755854 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.755863 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.755872 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.755879 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:13Z","lastTransitionTime":"2025-11-25T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.830470 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.830483 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:13 crc kubenswrapper[4482]: E1125 06:48:13.830556 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.830467 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:13 crc kubenswrapper[4482]: E1125 06:48:13.830661 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:13 crc kubenswrapper[4482]: E1125 06:48:13.830771 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.858077 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.858116 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.858125 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.858136 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.858149 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:13Z","lastTransitionTime":"2025-11-25T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.960152 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.960200 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.960211 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.960222 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:13 crc kubenswrapper[4482]: I1125 06:48:13.960229 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:13Z","lastTransitionTime":"2025-11-25T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.061657 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.061679 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.061687 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.061698 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.061705 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:14Z","lastTransitionTime":"2025-11-25T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.163411 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.163464 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.163474 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.163487 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.163495 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:14Z","lastTransitionTime":"2025-11-25T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.265100 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.265126 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.265151 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.265160 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.265182 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:14Z","lastTransitionTime":"2025-11-25T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.366524 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.366546 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.366555 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.366565 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.366573 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:14Z","lastTransitionTime":"2025-11-25T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.467902 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.467928 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.467939 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.467951 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.467962 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:14Z","lastTransitionTime":"2025-11-25T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.569350 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.569381 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.569390 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.569411 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.569419 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:14Z","lastTransitionTime":"2025-11-25T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.671028 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.671061 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.671070 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.671081 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.671089 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:14Z","lastTransitionTime":"2025-11-25T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.772917 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.772945 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.772955 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.772965 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.772973 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:14Z","lastTransitionTime":"2025-11-25T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.829690 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4"
Nov 25 06:48:14 crc kubenswrapper[4482]: E1125 06:48:14.829794 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.830476 4482 scope.go:117] "RemoveContainer" containerID="9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.874532 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.874558 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.874567 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.874578 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.874586 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:14Z","lastTransitionTime":"2025-11-25T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.976041 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.976075 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.976085 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.976098 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:14 crc kubenswrapper[4482]: I1125 06:48:14.976107 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:14Z","lastTransitionTime":"2025-11-25T06:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.077547 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.077577 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.077586 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.077596 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.077605 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:15Z","lastTransitionTime":"2025-11-25T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.118214 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/2.log"
Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.122800 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerStarted","Data":"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab"}
Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.123220 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr"
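The block above repeats roughly every 100 ms because the kubelet re-derives the node's Ready condition on each status-manager pass: the container runtime keeps reporting NetworkReady=false until a CNI network config appears in /etc/kubernetes/cni/net.d/, and every pass re-records the same four node events plus the NodeNotReady condition. A minimal Go sketch of the underlying directory check, assuming libcni's usual .conf/.conflist/.json extensions (the real predicate lives in CRI-O's CNI handling, not here):

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

// Report whether a CNI config directory would satisfy the runtime's
// NetworkReady check. The path is the one named in the log above.
func main() {
    confDir := "/etc/kubernetes/cni/net.d"
    entries, err := os.ReadDir(confDir)
    if err != nil {
        fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", confDir, err)
        os.Exit(1)
    }
    var configs []string
    for _, e := range entries {
        switch strings.ToLower(filepath.Ext(e.Name())) {
        case ".conf", ".conflist", ".json": // extensions libcni loads by default
            configs = append(configs, e.Name())
        }
    }
    if len(configs) == 0 {
        fmt.Println("no CNI configuration file -> NetworkReady=false, node stays NotReady")
        return
    }
    fmt.Println("CNI configuration present:", configs)
}

The NotReady loop should stop as soon as the network plugin manages to drop a config file into that directory.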
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.146524 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.154002 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.162408 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.172107 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.179992 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.180021 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.180030 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.180043 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.180051 4482 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:15Z","lastTransitionTime":"2025-11-25T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.180586 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.188547 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.197528 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.211376 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.219255 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.228928 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:11Z\\\",\\\"message\\\":\\\"2025-11-25T06:47:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea\\\\n2025-11-25T06:47:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea to /host/opt/cni/bin/\\\\n2025-11-25T06:47:26Z [verbose] multus-daemon started\\\\n2025-11-25T06:47:26Z [verbose] Readiness Indicator file check\\\\n2025-11-25T06:48:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:48:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.242008 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:45Z\\\",\\\"message\\\":\\\"shift-dns/node-resolver-xk9c4\\\\nI1125 06:47:45.602488 6043 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1125 06:47:45.602506 6043 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-xk9c4 in node crc\\\\nI1125 06:47:45.602442 6043 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1125 06:47:45.602514 6043 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-xk9c4 after 0 failed attempt(s)\\\\nI1125 06:47:45.602725 6043 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-xk9c4\\\\nF1125 06:47:45.602506 6043 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:48:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.249256 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 
06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.259654 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.270800 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d314f82-e6a3-44d6-b59b-b68552730866\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d5f730d9fc2cf67bca05c6b7ca8035f813d91a8ac6b069f70457b5a63e9d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://645d4b2d1e65d0d5b0e29914ac6e7ac26a91d65ad5ea42a309e983cf633e9fb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c736aa6a7231244785b8651eda784a6aa13f745d1e95a7d4963458ebe6647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.281382 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.286262 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.286299 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.286309 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.286358 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.286375 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:15Z","lastTransitionTime":"2025-11-25T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.293660 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.388629 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.388652 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.388660 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.388673 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.388682 4482 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:15Z","lastTransitionTime":"2025-11-25T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.490924 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.490961 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.490969 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.490982 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.490991 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:15Z","lastTransitionTime":"2025-11-25T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.592699 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.592811 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.593002 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.593201 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.593369 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:15Z","lastTransitionTime":"2025-11-25T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.695493 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.695629 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.695713 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.695802 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.695876 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:15Z","lastTransitionTime":"2025-11-25T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.797992 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.798024 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.798035 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.798046 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.798055 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:15Z","lastTransitionTime":"2025-11-25T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.830266 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:15 crc kubenswrapper[4482]: E1125 06:48:15.830416 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.830330 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:15 crc kubenswrapper[4482]: E1125 06:48:15.830592 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.830295 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:15 crc kubenswrapper[4482]: E1125 06:48:15.830760 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.839502 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.847005 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.855913 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.864100 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.873197 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.882030 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.890995 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.899692 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.899719 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.899728 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.899738 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.899746 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:15Z","lastTransitionTime":"2025-11-25T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.903918 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:11Z\\\",\\\"message\\\":\\\"2025-11-25T06:47:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea\\\\n2025-11-25T06:47:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea to /host/opt/cni/bin/\\\\n2025-11-25T06:47:26Z [verbose] multus-daemon started\\\\n2025-11-25T06:47:26Z [verbose] Readiness Indicator file check\\\\n2025-11-25T06:48:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:48:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.916042 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:45Z\\\",\\\"message\\\":\\\"shift-dns/node-resolver-xk9c4\\\\nI1125 06:47:45.602488 6043 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1125 06:47:45.602506 6043 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-xk9c4 in node crc\\\\nI1125 06:47:45.602442 6043 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1125 06:47:45.602514 6043 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-xk9c4 after 0 failed attempt(s)\\\\nI1125 06:47:45.602725 6043 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-xk9c4\\\\nF1125 06:47:45.602506 6043 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:48:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.923127 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 
06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.931329 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.939449 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.947237 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.954208 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.962209 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.969602 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d314f82-e6a3-44d6-b59b-b68552730866\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d5f730d9fc2cf67bca05c6b7ca8035f813d91a8ac6b069f70457b5a63e9d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://645d4b2d1e65d0d5b0e29914ac6e7ac26a91d65ad5ea42a309e983cf633e9fb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c736aa6a7231244785b8651eda784a6aa13f745d1e95a7d4963458ebe6647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:15 crc kubenswrapper[4482]: I1125 06:48:15.979008 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.001460 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.001490 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.001515 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.001525 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.001535 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:16Z","lastTransitionTime":"2025-11-25T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.103160 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.103191 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.103199 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.103209 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.103217 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:16Z","lastTransitionTime":"2025-11-25T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.127245 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/3.log" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.128204 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/2.log" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.130544 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerID="2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab" exitCode=1 Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.130574 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab"} Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.130600 4482 scope.go:117] "RemoveContainer" containerID="9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.131044 4482 scope.go:117] "RemoveContainer" containerID="2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab" Nov 25 06:48:16 crc kubenswrapper[4482]: E1125 06:48:16.131187 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.139821 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d314f82-e6a3-44d6-b59b-b68552730866\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d5f730d9fc2cf67bca05c6b7ca8035f813d91a8ac6b069f70457b5a63e9d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://645d4b2d1e65d0d5b0e29914ac6e7ac26a91d65ad5ea42a309e983cf633e9fb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c736aa6a7231244785b8651eda784a6aa13f745d1e95a7d4963458ebe6647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.149989 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.157024 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.165223 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.173182 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.181190 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.188212 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.195406 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.204683 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.205253 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.205275 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.205283 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.205298 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.205307 4482 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:16Z","lastTransitionTime":"2025-11-25T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.211274 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.219335 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.227620 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.235199 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.243360 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:11Z\\\",\\\"message\\\":\\\"2025-11-25T06:47:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea\\\\n2025-11-25T06:47:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea to /host/opt/cni/bin/\\\\n2025-11-25T06:47:26Z [verbose] multus-daemon started\\\\n2025-11-25T06:47:26Z [verbose] Readiness Indicator file check\\\\n2025-11-25T06:48:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:48:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.255723 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c97332b363c2d00d51e74c413b81da75047ae08ec0f5e6b05f50debf389018f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:47:45Z\\\",\\\"message\\\":\\\"shift-dns/node-resolver-xk9c4\\\\nI1125 06:47:45.602488 6043 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI1125 06:47:45.602506 6043 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-xk9c4 in node crc\\\\nI1125 06:47:45.602442 6043 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1125 06:47:45.602514 6043 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-xk9c4 after 0 failed attempt(s)\\\\nI1125 06:47:45.602725 6043 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-xk9c4\\\\nF1125 06:47:45.602506 6043 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:97\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:15Z\\\",\\\"message\\\":\\\"rol-plane-749d76644c-qpxjn\\\\nI1125 06:48:15.469567 6422 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn in node crc\\\\nI1125 06:48:15.469572 6422 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn after 0 failed attempt(s)\\\\nI1125 06:48:15.469581 6422 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn\\\\nF1125 06:48:15.469587 6422 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:48:15.4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:48:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.263148 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 
06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.272059 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:16Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.306800 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.306828 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.306837 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.306846 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.306854 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:16Z","lastTransitionTime":"2025-11-25T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.408703 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.408726 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.408735 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.408747 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.408758 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:16Z","lastTransitionTime":"2025-11-25T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.510113 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.510134 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.510141 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.510150 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.510157 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:16Z","lastTransitionTime":"2025-11-25T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.611759 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.611787 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.611795 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.611804 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.611811 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:16Z","lastTransitionTime":"2025-11-25T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.713125 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.713151 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.713160 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.713193 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.713202 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:16Z","lastTransitionTime":"2025-11-25T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.815135 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.815202 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.815215 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.815227 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.815234 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:16Z","lastTransitionTime":"2025-11-25T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.830341 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:16 crc kubenswrapper[4482]: E1125 06:48:16.830474 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.917414 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.918159 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.918245 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.918327 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:16 crc kubenswrapper[4482]: I1125 06:48:16.918592 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:16Z","lastTransitionTime":"2025-11-25T06:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.021478 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.021933 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.022024 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.022123 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.022204 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.124938 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.125266 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.125348 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.125448 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.125510 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.135235 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/3.log" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.137984 4482 scope.go:117] "RemoveContainer" containerID="2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab" Nov 25 06:48:17 crc kubenswrapper[4482]: E1125 06:48:17.138110 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.146665 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.155008 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.162603 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.169596 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.178558 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.184978 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.193554 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.200779 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 
06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.210287 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.218468 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.227015 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.228340 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.228571 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.228676 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.228787 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.230290 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.238997 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:11Z\\\",\\\"message\\\":\\\"2025-11-25T06:47:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea\\\\n2025-11-25T06:47:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea to /host/opt/cni/bin/\\\\n2025-11-25T06:47:26Z [verbose] multus-daemon started\\\\n2025-11-25T06:47:26Z [verbose] Readiness Indicator file check\\\\n2025-11-25T06:48:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:48:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.250976 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:15Z\\\",\\\"message\\\":\\\"rol-plane-749d76644c-qpxjn\\\\nI1125 06:48:15.469567 6422 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn in node crc\\\\nI1125 06:48:15.469572 6422 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn after 0 failed attempt(s)\\\\nI1125 06:48:15.469581 6422 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn\\\\nF1125 06:48:15.469587 6422 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:48:15.4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:48:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.259962 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.268060 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d314f82-e6a3-44d6-b59b-b68552730866\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d5f730d9fc2cf67bca05c6b7ca8035f813d91a8ac6b069f70457b5a63e9d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://645d4b2d1e65d0d5b0e29914ac6e7ac26a91d65ad5ea42a309e983cf633e9fb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c736aa6a7231244785b8651eda784a6aa13f745d1e95a7d4963458ebe6647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.278008 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.284864 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.332019 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.332048 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.332058 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.332072 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.332082 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.433598 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.433622 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.433631 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.433644 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.433652 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.535895 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.535917 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.535924 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.535932 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.535939 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.629931 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.629972 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.629983 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.629999 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.630011 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 06:48:17 crc kubenswrapper[4482]: E1125 06:48:17.639545 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 
2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.641976 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.642008 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.642016 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.642030 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.642039 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: E1125 06:48:17.649966 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 
2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.652059 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.652082 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.652092 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.652101 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.652109 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: E1125 06:48:17.660639 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 
2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.662898 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.662920 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.662928 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.662941 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.662950 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: E1125 06:48:17.670708 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 
2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.672655 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.672682 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.672690 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.672700 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.672707 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: E1125 06:48:17.680110 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:17Z is after 
2025-08-24T17:21:41Z" Nov 25 06:48:17 crc kubenswrapper[4482]: E1125 06:48:17.680227 4482 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.681161 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.681194 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.681202 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.681212 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.681219 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.782720 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.782760 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.782794 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.782805 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.782813 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.830566 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:17 crc kubenswrapper[4482]: E1125 06:48:17.830643 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.830763 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:17 crc kubenswrapper[4482]: E1125 06:48:17.830823 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.830998 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:17 crc kubenswrapper[4482]: E1125 06:48:17.831047 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.884753 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.884801 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.884810 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.884823 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.884832 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.986050 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.986076 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.986086 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.986095 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:17 crc kubenswrapper[4482]: I1125 06:48:17.986103 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:17Z","lastTransitionTime":"2025-11-25T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.087721 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.087750 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.087760 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.087770 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.087792 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:18Z","lastTransitionTime":"2025-11-25T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.188957 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.188973 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.188981 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.188989 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.188996 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:18Z","lastTransitionTime":"2025-11-25T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.290543 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.290585 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.290594 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.290603 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.290610 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:18Z","lastTransitionTime":"2025-11-25T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.393833 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.393869 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.393878 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.393891 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.393900 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:18Z","lastTransitionTime":"2025-11-25T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.495882 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.495925 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.495934 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.495944 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.495953 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:18Z","lastTransitionTime":"2025-11-25T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.598078 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.598106 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.598131 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.598142 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.598149 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:18Z","lastTransitionTime":"2025-11-25T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.699872 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.699922 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.699931 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.699941 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.699949 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:18Z","lastTransitionTime":"2025-11-25T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.801431 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.801470 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.801479 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.801492 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.801502 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:18Z","lastTransitionTime":"2025-11-25T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.830406 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:18 crc kubenswrapper[4482]: E1125 06:48:18.830499 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.903471 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.903626 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.903685 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.903742 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:18 crc kubenswrapper[4482]: I1125 06:48:18.903809 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:18Z","lastTransitionTime":"2025-11-25T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.005089 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.005136 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.005146 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.005156 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.005163 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:19Z","lastTransitionTime":"2025-11-25T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.107051 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.107078 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.107087 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.107097 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.107107 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:19Z","lastTransitionTime":"2025-11-25T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.208706 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.208738 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.208749 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.208762 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.208771 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:19Z","lastTransitionTime":"2025-11-25T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.310836 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.310859 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.310868 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.310878 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.310886 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:19Z","lastTransitionTime":"2025-11-25T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.412663 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.412688 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.412698 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.412707 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.412714 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:19Z","lastTransitionTime":"2025-11-25T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.513981 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.514007 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.514015 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.514027 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.514035 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:19Z","lastTransitionTime":"2025-11-25T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.615937 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.615963 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.615972 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.615982 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.615990 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:19Z","lastTransitionTime":"2025-11-25T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.717905 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.717930 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.717940 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.717950 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.717957 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:19Z","lastTransitionTime":"2025-11-25T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.820033 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.820070 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.820085 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.820102 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.820114 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:19Z","lastTransitionTime":"2025-11-25T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.830562 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.830626 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:19 crc kubenswrapper[4482]: E1125 06:48:19.830701 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.830722 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:19 crc kubenswrapper[4482]: E1125 06:48:19.830795 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:19 crc kubenswrapper[4482]: E1125 06:48:19.830837 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.921502 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.921556 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.921566 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.921577 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:19 crc kubenswrapper[4482]: I1125 06:48:19.921587 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:19Z","lastTransitionTime":"2025-11-25T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.023270 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.023299 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.023308 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.023318 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.023327 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:20Z","lastTransitionTime":"2025-11-25T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.125699 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.125728 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.125737 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.125748 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.125756 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:20Z","lastTransitionTime":"2025-11-25T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.227848 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.227876 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.227885 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.227896 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.227907 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:20Z","lastTransitionTime":"2025-11-25T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.329577 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.329645 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.329656 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.329670 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.329679 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:20Z","lastTransitionTime":"2025-11-25T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.431727 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.431749 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.431756 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.431765 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.431772 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:20Z","lastTransitionTime":"2025-11-25T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.533878 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.533903 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.533931 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.533941 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.533949 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:20Z","lastTransitionTime":"2025-11-25T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.635974 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.636012 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.636021 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.636035 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.636045 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:20Z","lastTransitionTime":"2025-11-25T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.737437 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.737464 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.737473 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.737483 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.737507 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:20Z","lastTransitionTime":"2025-11-25T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.830240 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:20 crc kubenswrapper[4482]: E1125 06:48:20.830328 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.838945 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.838996 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.839009 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.839026 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.839041 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:20Z","lastTransitionTime":"2025-11-25T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.940687 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.940714 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.940723 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.940732 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:20 crc kubenswrapper[4482]: I1125 06:48:20.940740 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:20Z","lastTransitionTime":"2025-11-25T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.041911 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.041977 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.041988 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.041997 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.042005 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:21Z","lastTransitionTime":"2025-11-25T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.143415 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.143468 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.143478 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.143491 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.143502 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:21Z","lastTransitionTime":"2025-11-25T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.245197 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.245241 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.245250 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.245263 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.245275 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:21Z","lastTransitionTime":"2025-11-25T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.347285 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.347316 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.347332 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.347344 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.347353 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:21Z","lastTransitionTime":"2025-11-25T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.449010 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.449042 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.449054 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.449102 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.449115 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:21Z","lastTransitionTime":"2025-11-25T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.550906 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.550944 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.550953 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.550967 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.550975 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:21Z","lastTransitionTime":"2025-11-25T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.652612 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.652645 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.652653 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.652666 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.652676 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:21Z","lastTransitionTime":"2025-11-25T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.753830 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.753864 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.753873 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.753884 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.753893 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:21Z","lastTransitionTime":"2025-11-25T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.830149 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.830251 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:21 crc kubenswrapper[4482]: E1125 06:48:21.830392 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.830451 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:21 crc kubenswrapper[4482]: E1125 06:48:21.830487 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:21 crc kubenswrapper[4482]: E1125 06:48:21.830557 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.856072 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.856097 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.856105 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.856115 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.856124 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:21Z","lastTransitionTime":"2025-11-25T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.957877 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.957917 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.957926 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.957940 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:21 crc kubenswrapper[4482]: I1125 06:48:21.957949 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:21Z","lastTransitionTime":"2025-11-25T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.059639 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.059667 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.059694 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.059706 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.059717 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:22Z","lastTransitionTime":"2025-11-25T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.161362 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.161381 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.161389 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.161400 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.161407 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:22Z","lastTransitionTime":"2025-11-25T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.263161 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.263215 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.263227 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.263236 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.263244 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:22Z","lastTransitionTime":"2025-11-25T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.365101 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.365126 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.365133 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.365143 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.365150 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:22Z","lastTransitionTime":"2025-11-25T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.466935 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.466962 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.466971 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.466982 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.466990 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:22Z","lastTransitionTime":"2025-11-25T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.568877 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.568918 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.568926 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.568936 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.568943 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:22Z","lastTransitionTime":"2025-11-25T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.670654 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.670677 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.670685 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.670694 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.670701 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:22Z","lastTransitionTime":"2025-11-25T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.772875 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.772897 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.772905 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.772919 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.772928 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:22Z","lastTransitionTime":"2025-11-25T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.830098 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:22 crc kubenswrapper[4482]: E1125 06:48:22.830206 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.874998 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.875042 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.875051 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.875061 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.875069 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:22Z","lastTransitionTime":"2025-11-25T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.976793 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.976850 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.976860 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.976873 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:22 crc kubenswrapper[4482]: I1125 06:48:22.976879 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:22Z","lastTransitionTime":"2025-11-25T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.079051 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.079074 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.079082 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.079092 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.079102 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:23Z","lastTransitionTime":"2025-11-25T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.180793 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.180818 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.180826 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.180835 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.180842 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:23Z","lastTransitionTime":"2025-11-25T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.282596 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.282625 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.282633 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.282643 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.282651 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:23Z","lastTransitionTime":"2025-11-25T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.384508 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.384608 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.384619 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.384628 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.384634 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:23Z","lastTransitionTime":"2025-11-25T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.486454 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.486481 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.486489 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.486500 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.486508 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:23Z","lastTransitionTime":"2025-11-25T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.588623 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.588656 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.588664 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.588676 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.588685 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:23Z","lastTransitionTime":"2025-11-25T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.691364 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.691399 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.691408 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.691421 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.691429 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:23Z","lastTransitionTime":"2025-11-25T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.793295 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.793339 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.793349 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.793363 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.793371 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:23Z","lastTransitionTime":"2025-11-25T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.830288 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.830454 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:23 crc kubenswrapper[4482]: E1125 06:48:23.830573 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.830586 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:23 crc kubenswrapper[4482]: E1125 06:48:23.830646 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:23 crc kubenswrapper[4482]: E1125 06:48:23.830703 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.895552 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.895602 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.895612 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.895622 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.895631 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:23Z","lastTransitionTime":"2025-11-25T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.997472 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.997504 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.997515 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.997528 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:23 crc kubenswrapper[4482]: I1125 06:48:23.997536 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:23Z","lastTransitionTime":"2025-11-25T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.098910 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.098930 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.098937 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.098947 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.098954 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:24Z","lastTransitionTime":"2025-11-25T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.199978 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.200020 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.200029 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.200039 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.200046 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:24Z","lastTransitionTime":"2025-11-25T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.301411 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.301452 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.301463 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.301473 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.301480 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:24Z","lastTransitionTime":"2025-11-25T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.403525 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.403553 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.403563 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.403573 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.403580 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:24Z","lastTransitionTime":"2025-11-25T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.505369 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.505519 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.505667 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.505826 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.505966 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:24Z","lastTransitionTime":"2025-11-25T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.608369 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.608393 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.608401 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.608413 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.608422 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:24Z","lastTransitionTime":"2025-11-25T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.710125 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.710151 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.710160 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.710184 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.710192 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:24Z","lastTransitionTime":"2025-11-25T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.812261 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.812480 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.812651 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.812787 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.812929 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:24Z","lastTransitionTime":"2025-11-25T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.830529 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:24 crc kubenswrapper[4482]: E1125 06:48:24.830863 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.839218 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.915095 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.915117 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.915124 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.915136 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:24 crc kubenswrapper[4482]: I1125 06:48:24.915144 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:24Z","lastTransitionTime":"2025-11-25T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.017073 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.017257 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.017319 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.017388 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.017443 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:25Z","lastTransitionTime":"2025-11-25T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.119483 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.119515 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.119523 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.119536 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.119545 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:25Z","lastTransitionTime":"2025-11-25T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.220711 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.220743 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.220752 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.220763 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.220771 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:25Z","lastTransitionTime":"2025-11-25T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.322082 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.322117 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.322127 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.322139 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.322149 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:25Z","lastTransitionTime":"2025-11-25T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.423617 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.423662 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.423670 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.423681 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.423688 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:25Z","lastTransitionTime":"2025-11-25T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.525438 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.525468 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.525476 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.525486 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.525494 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:25Z","lastTransitionTime":"2025-11-25T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.627016 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.627064 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.627078 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.627094 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.627105 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:25Z","lastTransitionTime":"2025-11-25T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.729040 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.729067 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.729075 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.729087 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.729096 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:25Z","lastTransitionTime":"2025-11-25T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.829757 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.829880 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.829901 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:25 crc kubenswrapper[4482]: E1125 06:48:25.830129 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:25 crc kubenswrapper[4482]: E1125 06:48:25.830234 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:25 crc kubenswrapper[4482]: E1125 06:48:25.830296 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.831549 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.831649 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.831659 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.831671 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.831683 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:25Z","lastTransitionTime":"2025-11-25T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.841787 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347202
43b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.849194 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d314f82-e6a3-44d6-b59b-b68552730866\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d5f730d9fc2cf67bca05c6b7ca8035f813d91a8ac6b069f70457b5a63e9d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://645d4b2d1e65d0d5b0e29914ac6e7ac26a91d65ad5ea42a309e983cf633e9fb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c736aa6a7231244785b8651eda784a6aa13f745d1e95a7d4963458ebe6647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.859700 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.866360 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.873068 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc 
kubenswrapper[4482]: I1125 06:48:25.879876 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c6d24f-5701-4ec0-a0fe-c04ff96666d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afabe0c26cf96847b662a1236a8d5f22205769282690735780ef24580c394cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac50938bda83c23f2391068a14a8c5f84554f1181814baf540b75713d7aa7493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac50938bda83c23f2391068a14a8c5f84554f1181814baf540b75713d7aa7493\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.887703 4482 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.900146 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.908020 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.917009 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df4
48aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.923628 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.931119 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.933199 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.933232 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.933241 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.933252 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.933261 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:25Z","lastTransitionTime":"2025-11-25T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.943187 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:15Z\\\",\\\"message\\\":\\\"rol-plane-749d76644c-qpxjn\\\\nI1125 06:48:15.469567 6422 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn in node crc\\\\nI1125 06:48:15.469572 6422 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn after 0 failed attempt(s)\\\\nI1125 06:48:15.469581 6422 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn\\\\nF1125 06:48:15.469587 6422 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:48:15.4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:48:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.950625 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.958332 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.966430 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.973939 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:25 crc kubenswrapper[4482]: I1125 06:48:25.981742 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:11Z\\\",\\\"message\\\":\\\"2025-11-25T06:47:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea\\\\n2025-11-25T06:47:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea to /host/opt/cni/bin/\\\\n2025-11-25T06:47:26Z [verbose] multus-daemon started\\\\n2025-11-25T06:47:26Z [verbose] Readiness Indicator file check\\\\n2025-11-25T06:48:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:48:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:25Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.034985 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.035014 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.035023 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.035064 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.035073 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:26Z","lastTransitionTime":"2025-11-25T06:48:26Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.137363 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.137396 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.137404 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.137418 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.137426 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:26Z","lastTransitionTime":"2025-11-25T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.238932 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.238963 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.238971 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.238983 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.238993 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:26Z","lastTransitionTime":"2025-11-25T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.340940 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.340967 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.340977 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.340987 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.340994 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:26Z","lastTransitionTime":"2025-11-25T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.442906 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.442944 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.442953 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.442967 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.442975 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:26Z","lastTransitionTime":"2025-11-25T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.545183 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.545212 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.545220 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.545231 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.545240 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:26Z","lastTransitionTime":"2025-11-25T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.647078 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.647115 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.647124 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.647137 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.647146 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:26Z","lastTransitionTime":"2025-11-25T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.752888 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.752921 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.752931 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.752942 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.752950 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:26Z","lastTransitionTime":"2025-11-25T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.830482 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:26 crc kubenswrapper[4482]: E1125 06:48:26.830568 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.855040 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.855077 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.855086 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.855099 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.855107 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:26Z","lastTransitionTime":"2025-11-25T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.956359 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.956389 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.956398 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.956408 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:26 crc kubenswrapper[4482]: I1125 06:48:26.956417 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:26Z","lastTransitionTime":"2025-11-25T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.058321 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.058345 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.058353 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.058364 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.058372 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.159393 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.159422 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.159431 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.159464 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.159480 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.261527 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.261551 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.261561 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.261572 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.261580 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.363494 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.363517 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.363525 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.363534 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.363541 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.465231 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.465254 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.465320 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.465332 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.465339 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.566772 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.566817 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.566826 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.566839 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.566848 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.668708 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.668733 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.668742 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.668752 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.668759 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.693223 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.693404 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:31.693387471 +0000 UTC m=+146.181618720 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.770758 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.770787 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.770806 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.770819 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.770826 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.794443 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.794478 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.794494 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.794509 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.794557 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.794578 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.794591 4482 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.794609 4482 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.794642 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 06:49:31.794628194 +0000 UTC m=+146.282859463 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.794656 4482 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.794659 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:49:31.794651298 +0000 UTC m=+146.282882567 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.794673 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.794699 4482 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.794707 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 06:49:31.794677017 +0000 UTC m=+146.282908275 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.794714 4482 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.794775 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 06:49:31.79475868 +0000 UTC m=+146.282989949 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.830650 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.830730 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.830780 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.830813 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.830887 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.830965 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.872474 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.872492 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.872500 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.872509 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.872515 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.876372 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.876398 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.876407 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.876418 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.876427 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.885029 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.887371 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.887398 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.887409 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.887421 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.887429 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.894892 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.897018 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.897057 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.897068 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.897077 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.897083 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.905090 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.907322 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.907348 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.907357 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.907367 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.907374 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.915131 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.917437 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.917478 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.917489 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.917498 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.917505 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.925582 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:27Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:27 crc kubenswrapper[4482]: E1125 06:48:27.925690 4482 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.974527 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.974556 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.974565 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.974577 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:27 crc kubenswrapper[4482]: I1125 06:48:27.974585 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:27Z","lastTransitionTime":"2025-11-25T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.076013 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.076047 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.076055 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.076065 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.076075 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:28Z","lastTransitionTime":"2025-11-25T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.177940 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.177963 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.177971 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.177980 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.177987 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:28Z","lastTransitionTime":"2025-11-25T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.280084 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.280108 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.280116 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.280127 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.280133 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:28Z","lastTransitionTime":"2025-11-25T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.381832 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.381862 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.381871 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.381885 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.381893 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:28Z","lastTransitionTime":"2025-11-25T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.483832 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.483857 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.483866 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.483877 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.483885 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:28Z","lastTransitionTime":"2025-11-25T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.586376 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.586409 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.586417 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.586429 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.586439 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:28Z","lastTransitionTime":"2025-11-25T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.688429 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.688479 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.688487 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.688502 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.688511 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:28Z","lastTransitionTime":"2025-11-25T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.790738 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.790772 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.790781 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.790795 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.790819 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:28Z","lastTransitionTime":"2025-11-25T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.830315 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:28 crc kubenswrapper[4482]: E1125 06:48:28.830446 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.893284 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.893310 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.893321 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.893330 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.893338 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:28Z","lastTransitionTime":"2025-11-25T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.995586 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.995616 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.995624 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.995653 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:28 crc kubenswrapper[4482]: I1125 06:48:28.995661 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:28Z","lastTransitionTime":"2025-11-25T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.097825 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.098005 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.098078 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.098152 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.098242 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:29Z","lastTransitionTime":"2025-11-25T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.200400 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.200437 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.200446 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.200459 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.200469 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:29Z","lastTransitionTime":"2025-11-25T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.301724 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.301756 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.301765 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.301776 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.301784 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:29Z","lastTransitionTime":"2025-11-25T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.403638 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.403672 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.403680 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.403693 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.403701 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:29Z","lastTransitionTime":"2025-11-25T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.505218 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.505252 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.505260 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.505271 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.505279 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:29Z","lastTransitionTime":"2025-11-25T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.607099 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.607127 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.607134 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.607146 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.607153 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:29Z","lastTransitionTime":"2025-11-25T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.708786 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.708839 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.708850 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.708873 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.708883 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:29Z","lastTransitionTime":"2025-11-25T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.810120 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.810147 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.810157 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.810180 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.810189 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:29Z","lastTransitionTime":"2025-11-25T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.830367 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:29 crc kubenswrapper[4482]: E1125 06:48:29.830441 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.830367 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.830476 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:29 crc kubenswrapper[4482]: E1125 06:48:29.830502 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:29 crc kubenswrapper[4482]: E1125 06:48:29.830562 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.831028 4482 scope.go:117] "RemoveContainer" containerID="2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab" Nov 25 06:48:29 crc kubenswrapper[4482]: E1125 06:48:29.831146 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.911294 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.911318 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.911325 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.911334 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:29 crc kubenswrapper[4482]: I1125 06:48:29.911341 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:29Z","lastTransitionTime":"2025-11-25T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.013051 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.013075 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.013082 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.013091 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.013113 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:30Z","lastTransitionTime":"2025-11-25T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.115277 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.115310 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.115318 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.115329 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.115337 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:30Z","lastTransitionTime":"2025-11-25T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.217679 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.217735 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.217748 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.217767 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.217781 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:30Z","lastTransitionTime":"2025-11-25T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.320130 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.320181 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.320190 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.320203 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.320210 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:30Z","lastTransitionTime":"2025-11-25T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.422313 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.422347 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.422355 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.422368 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.422377 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:30Z","lastTransitionTime":"2025-11-25T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.524389 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.524427 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.524435 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.524448 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.524456 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:30Z","lastTransitionTime":"2025-11-25T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.627075 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.627106 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.627114 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.627126 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.627135 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:30Z","lastTransitionTime":"2025-11-25T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.729219 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.729251 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.729259 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.729271 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.729278 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:30Z","lastTransitionTime":"2025-11-25T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.830135 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:30 crc kubenswrapper[4482]: E1125 06:48:30.830269 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.831629 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.831652 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.831660 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.831670 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.831677 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:30Z","lastTransitionTime":"2025-11-25T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.933531 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.933555 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.933562 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.933571 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:30 crc kubenswrapper[4482]: I1125 06:48:30.933577 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:30Z","lastTransitionTime":"2025-11-25T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.034707 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.034729 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.034737 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.034745 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.034753 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:31Z","lastTransitionTime":"2025-11-25T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.135716 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.135737 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.135746 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.135754 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.135761 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:31Z","lastTransitionTime":"2025-11-25T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.237209 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.237231 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.237238 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.237247 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.237254 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:31Z","lastTransitionTime":"2025-11-25T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.338991 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.339012 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.339019 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.339028 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.339035 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:31Z","lastTransitionTime":"2025-11-25T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.440653 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.440681 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.440691 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.440702 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.440711 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:31Z","lastTransitionTime":"2025-11-25T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.542256 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.542276 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.542283 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.542292 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.542298 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:31Z","lastTransitionTime":"2025-11-25T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.643594 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.643615 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.643622 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.643630 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.643636 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:31Z","lastTransitionTime":"2025-11-25T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.745458 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.745485 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.745493 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.745502 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.745509 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:31Z","lastTransitionTime":"2025-11-25T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.830605 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.830622 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:31 crc kubenswrapper[4482]: E1125 06:48:31.830693 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.830720 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:31 crc kubenswrapper[4482]: E1125 06:48:31.830776 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:31 crc kubenswrapper[4482]: E1125 06:48:31.830837 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.847331 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.847354 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.847363 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.847373 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.847380 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:31Z","lastTransitionTime":"2025-11-25T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.950015 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.950050 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.950061 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.950073 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:31 crc kubenswrapper[4482]: I1125 06:48:31.950082 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:31Z","lastTransitionTime":"2025-11-25T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.051652 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.051684 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.051692 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.051703 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.051710 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:32Z","lastTransitionTime":"2025-11-25T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.153758 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.153783 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.153790 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.153802 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.153822 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:32Z","lastTransitionTime":"2025-11-25T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.255305 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.255336 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.255345 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.255355 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.255363 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:32Z","lastTransitionTime":"2025-11-25T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.357600 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.357626 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.357633 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.357644 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.357651 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:32Z","lastTransitionTime":"2025-11-25T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.459563 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.459587 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.459595 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.459605 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.459612 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:32Z","lastTransitionTime":"2025-11-25T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.563403 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.563431 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.563438 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.563449 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.563457 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:32Z","lastTransitionTime":"2025-11-25T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.665377 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.665401 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.665408 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.665418 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.665425 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:32Z","lastTransitionTime":"2025-11-25T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.766487 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.766516 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.766526 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.766536 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.766544 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:32Z","lastTransitionTime":"2025-11-25T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.830513 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:32 crc kubenswrapper[4482]: E1125 06:48:32.830686 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.867698 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.867720 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.867728 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.867736 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.867743 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:32Z","lastTransitionTime":"2025-11-25T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.969524 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.969542 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.969550 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.969558 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:32 crc kubenswrapper[4482]: I1125 06:48:32.969564 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:32Z","lastTransitionTime":"2025-11-25T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.070961 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.070984 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.070994 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.071004 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.071012 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:33Z","lastTransitionTime":"2025-11-25T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.171993 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.172020 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.172029 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.172040 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.172047 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:33Z","lastTransitionTime":"2025-11-25T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.273455 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.273480 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.273487 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.273496 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.273503 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:33Z","lastTransitionTime":"2025-11-25T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.375196 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.375214 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.375221 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.375228 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.375234 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:33Z","lastTransitionTime":"2025-11-25T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.476692 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.476715 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.476723 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.476731 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.476738 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:33Z","lastTransitionTime":"2025-11-25T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.578612 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.578636 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.578644 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.578653 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.578659 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:33Z","lastTransitionTime":"2025-11-25T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.680408 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.680580 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.680660 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.680721 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.680777 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:33Z","lastTransitionTime":"2025-11-25T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.783338 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.783370 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.783380 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.783392 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.783399 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:33Z","lastTransitionTime":"2025-11-25T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.829956 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.830022 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:33 crc kubenswrapper[4482]: E1125 06:48:33.830049 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:33 crc kubenswrapper[4482]: E1125 06:48:33.830127 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.830259 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:33 crc kubenswrapper[4482]: E1125 06:48:33.830325 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.884894 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.884924 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.884932 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.884942 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.884951 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:33Z","lastTransitionTime":"2025-11-25T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.986644 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.986675 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.986685 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.986698 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:33 crc kubenswrapper[4482]: I1125 06:48:33.986707 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:33Z","lastTransitionTime":"2025-11-25T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.088716 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.088744 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.088753 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.088766 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.088774 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:34Z","lastTransitionTime":"2025-11-25T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.190484 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.190522 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.190530 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.190542 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.190551 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:34Z","lastTransitionTime":"2025-11-25T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.292503 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.292962 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.293030 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.293095 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.293148 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:34Z","lastTransitionTime":"2025-11-25T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.395521 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.395715 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.395771 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.395848 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.395902 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:34Z","lastTransitionTime":"2025-11-25T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.497858 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.497880 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.497887 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.497897 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.497903 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:34Z","lastTransitionTime":"2025-11-25T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.599602 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.599621 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.599628 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.599636 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.599643 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:34Z","lastTransitionTime":"2025-11-25T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.701643 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.701666 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.701674 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.701683 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.701690 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:34Z","lastTransitionTime":"2025-11-25T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.803484 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.803600 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.803669 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.803731 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.803788 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:34Z","lastTransitionTime":"2025-11-25T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.829856 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:34 crc kubenswrapper[4482]: E1125 06:48:34.830015 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.905340 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.905367 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.905375 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.905384 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:34 crc kubenswrapper[4482]: I1125 06:48:34.905391 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:34Z","lastTransitionTime":"2025-11-25T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.007055 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.007078 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.007085 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.007094 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.007101 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:35Z","lastTransitionTime":"2025-11-25T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.108721 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.108754 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.108764 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.108777 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.108785 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:35Z","lastTransitionTime":"2025-11-25T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.209871 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.209897 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.209905 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.209913 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.209921 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:35Z","lastTransitionTime":"2025-11-25T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.311102 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.311128 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.311136 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.311145 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.311151 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:35Z","lastTransitionTime":"2025-11-25T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.412784 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.412804 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.412811 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.412830 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.412836 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:35Z","lastTransitionTime":"2025-11-25T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.514141 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.514162 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.514184 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.514194 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.514200 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:35Z","lastTransitionTime":"2025-11-25T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.616022 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.616043 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.616050 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.616058 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.616064 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:35Z","lastTransitionTime":"2025-11-25T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.718096 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.718118 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.718126 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.718134 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.718141 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:35Z","lastTransitionTime":"2025-11-25T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.819259 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.819290 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.819302 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.819312 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.819321 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:35Z","lastTransitionTime":"2025-11-25T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.830511 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.830527 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:35 crc kubenswrapper[4482]: E1125 06:48:35.830596 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.830634 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:35 crc kubenswrapper[4482]: E1125 06:48:35.830696 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:35 crc kubenswrapper[4482]: E1125 06:48:35.830809 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.840324 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:4
7:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.847314 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d314f82-e6a3-44d6-b59b-b68552730866\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d5f730d9fc2cf67bca05c6b7ca8035f813d91a8ac6b069f70457b5a63e9d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://645d4b2d1e65d0d5b0e29914ac6e7ac26a91d65ad5ea42a309e983cf633e9fb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c736aa6a7231244785b8651eda784a6aa13f745d1e95a7d4963458ebe6647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.855723 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.862316 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.868971 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c6d24f-5701-4ec0-a0fe-c04ff96666d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afabe0c26cf96847b662a1236a8d5f22205769282690735780ef24580c394cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac50938bda83c23f2391068a14a8c5f84554f1181814baf540b75713d7aa7493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac50938bda83c23f2391068a14a8c5f84554f1181814baf540b75713d7aa7493\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.876393 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.883099 
4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.889864 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.898028 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.906480 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.912471 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.919499 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.920299 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.920322 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.920331 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.920342 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.920349 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:35Z","lastTransitionTime":"2025-11-25T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.927028 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.935076 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.942102 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.949465 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:11Z\\\",\\\"message\\\":\\\"2025-11-25T06:47:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea\\\\n2025-11-25T06:47:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea to /host/opt/cni/bin/\\\\n2025-11-25T06:47:26Z [verbose] multus-daemon started\\\\n2025-11-25T06:47:26Z [verbose] Readiness Indicator file check\\\\n2025-11-25T06:48:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:48:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.962967 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:15Z\\\",\\\"message\\\":\\\"rol-plane-749d76644c-qpxjn\\\\nI1125 06:48:15.469567 6422 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn in node crc\\\\nI1125 06:48:15.469572 6422 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn after 0 failed attempt(s)\\\\nI1125 06:48:15.469581 6422 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn\\\\nF1125 06:48:15.469587 6422 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:48:15.4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:48:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:35 crc kubenswrapper[4482]: I1125 06:48:35.969825 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:35Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.022458 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.022576 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.022643 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.022710 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.022773 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:36Z","lastTransitionTime":"2025-11-25T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.124625 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.124651 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.124659 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.124669 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.124675 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:36Z","lastTransitionTime":"2025-11-25T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.226100 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.226145 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.226157 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.226201 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.226216 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:36Z","lastTransitionTime":"2025-11-25T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.328001 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.328031 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.328040 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.328052 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.328061 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:36Z","lastTransitionTime":"2025-11-25T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.429702 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.429752 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.429762 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.429778 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.429786 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:36Z","lastTransitionTime":"2025-11-25T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.531401 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.531433 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.531441 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.531452 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.531460 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:36Z","lastTransitionTime":"2025-11-25T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.633546 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.633597 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.633607 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.633619 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.633627 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:36Z","lastTransitionTime":"2025-11-25T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.735682 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.735722 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.735733 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.735748 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.735756 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:36Z","lastTransitionTime":"2025-11-25T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.830650 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4"
Nov 25 06:48:36 crc kubenswrapper[4482]: E1125 06:48:36.830747 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7"
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.838132 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.838159 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.838183 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.838196 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.838205 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:36Z","lastTransitionTime":"2025-11-25T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.940285 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.940325 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.940335 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.940349 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:36 crc kubenswrapper[4482]: I1125 06:48:36.940359 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:36Z","lastTransitionTime":"2025-11-25T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.041983 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.042010 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.042019 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.042030 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.042038 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:37Z","lastTransitionTime":"2025-11-25T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.144077 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.144125 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.144134 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.144145 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.144152 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:37Z","lastTransitionTime":"2025-11-25T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.245666 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.245700 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.245709 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.245721 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.245729 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:37Z","lastTransitionTime":"2025-11-25T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.347833 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.347858 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.347865 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.347874 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.347881 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:37Z","lastTransitionTime":"2025-11-25T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.449560 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.449598 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.449607 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.449637 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.449647 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:37Z","lastTransitionTime":"2025-11-25T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.551846 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.551885 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.551894 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.551906 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.551932 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:37Z","lastTransitionTime":"2025-11-25T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.653658 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.653701 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.653711 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.653722 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.653733 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:37Z","lastTransitionTime":"2025-11-25T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.755232 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.755274 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.755283 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.755296 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.755305 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:37Z","lastTransitionTime":"2025-11-25T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.830243 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.830287 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 06:48:37 crc kubenswrapper[4482]: E1125 06:48:37.830343 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.830252 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 06:48:37 crc kubenswrapper[4482]: E1125 06:48:37.830436 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 06:48:37 crc kubenswrapper[4482]: E1125 06:48:37.830490 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.856534 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.856574 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.856583 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.856592 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.856600 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:37Z","lastTransitionTime":"2025-11-25T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.958146 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.958184 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.958192 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.958201 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.958208 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:37Z","lastTransitionTime":"2025-11-25T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.973161 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.973216 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.973224 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.973238 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.973249 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:37Z","lastTransitionTime":"2025-11-25T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 06:48:37 crc kubenswrapper[4482]: E1125 06:48:37.982838 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.985466 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.985495 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.985505 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.985515 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:37 crc kubenswrapper[4482]: I1125 06:48:37.985549 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:37Z","lastTransitionTime":"2025-11-25T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:37 crc kubenswrapper[4482]: E1125 06:48:37.998014 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:37Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.000508 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.000534 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.000543 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.000570 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.000579 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:38Z","lastTransitionTime":"2025-11-25T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:38 crc kubenswrapper[4482]: E1125 06:48:38.008307 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.010270 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.010376 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.010531 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.010664 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.010811 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:38Z","lastTransitionTime":"2025-11-25T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:38 crc kubenswrapper[4482]: E1125 06:48:38.018851 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.021540 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.021570 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.021578 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.021590 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.021599 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:38Z","lastTransitionTime":"2025-11-25T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:38 crc kubenswrapper[4482]: E1125 06:48:38.029557 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:38Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:38 crc kubenswrapper[4482]: E1125 06:48:38.029783 4482 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.059676 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.059771 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.059849 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.059914 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.059977 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:38Z","lastTransitionTime":"2025-11-25T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.161152 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.161202 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.161211 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.161222 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.161229 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:38Z","lastTransitionTime":"2025-11-25T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.262348 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.262484 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.262562 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.262635 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.262688 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:38Z","lastTransitionTime":"2025-11-25T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.364240 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.364261 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.364269 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.364278 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.364285 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:38Z","lastTransitionTime":"2025-11-25T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.466228 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.466251 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.466258 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.466267 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.466274 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:38Z","lastTransitionTime":"2025-11-25T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.567944 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.567967 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.567974 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.567983 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.567989 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:38Z","lastTransitionTime":"2025-11-25T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.669544 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.669573 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.669581 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.669592 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.669600 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:38Z","lastTransitionTime":"2025-11-25T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.771571 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.771596 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.771604 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.771614 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.771621 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:38Z","lastTransitionTime":"2025-11-25T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.830361 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:38 crc kubenswrapper[4482]: E1125 06:48:38.830494 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.873219 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.873245 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.873254 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.873262 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.873269 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:38Z","lastTransitionTime":"2025-11-25T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.975298 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.975322 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.975331 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.975342 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:38 crc kubenswrapper[4482]: I1125 06:48:38.975350 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:38Z","lastTransitionTime":"2025-11-25T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.077406 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.077438 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.077448 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.077458 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.077466 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:39Z","lastTransitionTime":"2025-11-25T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.179101 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.179129 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.179138 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.179150 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.179157 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:39Z","lastTransitionTime":"2025-11-25T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.280484 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.280516 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.280526 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.280537 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.280545 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:39Z","lastTransitionTime":"2025-11-25T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.382411 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.382452 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.382462 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.382477 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.382485 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:39Z","lastTransitionTime":"2025-11-25T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.483936 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.483964 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.483973 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.483983 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.483990 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:39Z","lastTransitionTime":"2025-11-25T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.586401 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.586427 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.586435 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.586446 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.586454 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:39Z","lastTransitionTime":"2025-11-25T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.688478 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.688506 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.688515 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.688549 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.688560 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:39Z","lastTransitionTime":"2025-11-25T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.790669 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.790703 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.790711 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.790722 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.790730 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:39Z","lastTransitionTime":"2025-11-25T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.830246 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.830250 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.830289 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:39 crc kubenswrapper[4482]: E1125 06:48:39.830364 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:39 crc kubenswrapper[4482]: E1125 06:48:39.830440 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:39 crc kubenswrapper[4482]: E1125 06:48:39.830522 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.891786 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.891808 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.891815 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.891832 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.891841 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:39Z","lastTransitionTime":"2025-11-25T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.993613 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.993641 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.993649 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.993658 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:39 crc kubenswrapper[4482]: I1125 06:48:39.993665 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:39Z","lastTransitionTime":"2025-11-25T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.095901 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.095957 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.095968 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.095985 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.095997 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:40Z","lastTransitionTime":"2025-11-25T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.198630 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.198665 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.198676 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.198687 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.198700 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:40Z","lastTransitionTime":"2025-11-25T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.300116 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.300141 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.300181 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.300191 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.300199 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:40Z","lastTransitionTime":"2025-11-25T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.401536 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.401555 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.401563 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.401571 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.401577 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:40Z","lastTransitionTime":"2025-11-25T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.503402 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.503426 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.503434 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.503444 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.503451 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:40Z","lastTransitionTime":"2025-11-25T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.605137 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.605192 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.605201 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.605209 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.605216 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:40Z","lastTransitionTime":"2025-11-25T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.706345 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.706439 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.706502 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.706671 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.706791 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:40Z","lastTransitionTime":"2025-11-25T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.808438 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.808548 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.808610 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.808672 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.808723 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:40Z","lastTransitionTime":"2025-11-25T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.830703 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:40 crc kubenswrapper[4482]: E1125 06:48:40.830773 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.910394 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.910413 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.910420 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.910428 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:40 crc kubenswrapper[4482]: I1125 06:48:40.910435 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:40Z","lastTransitionTime":"2025-11-25T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.012005 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.012023 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.012031 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.012039 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.012045 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:41Z","lastTransitionTime":"2025-11-25T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.112963 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.112978 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.112985 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.112993 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.112999 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:41Z","lastTransitionTime":"2025-11-25T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.214139 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.214325 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.214395 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.214456 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.214520 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:41Z","lastTransitionTime":"2025-11-25T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.316236 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.316290 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.316301 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.316310 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.316317 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:41Z","lastTransitionTime":"2025-11-25T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.417490 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.417891 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.417984 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.418060 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.418145 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:41Z","lastTransitionTime":"2025-11-25T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.519611 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.519654 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.519662 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.519671 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.519678 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:41Z","lastTransitionTime":"2025-11-25T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.620958 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.621112 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.621201 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.621274 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.621327 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:41Z","lastTransitionTime":"2025-11-25T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.722649 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.722674 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.722681 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.722691 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.722697 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:41Z","lastTransitionTime":"2025-11-25T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.824444 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.824464 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.824473 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.824481 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.824488 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:41Z","lastTransitionTime":"2025-11-25T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.830750 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.830786 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 06:48:41 crc kubenswrapper[4482]: E1125 06:48:41.830819 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.830956 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 06:48:41 crc kubenswrapper[4482]: E1125 06:48:41.830978 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 06:48:41 crc kubenswrapper[4482]: E1125 06:48:41.831022 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.926078 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.926101 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.926109 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.926119 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:41 crc kubenswrapper[4482]: I1125 06:48:41.926141 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:41Z","lastTransitionTime":"2025-11-25T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.027762 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.027783 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.027790 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.027798 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.027804 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:42Z","lastTransitionTime":"2025-11-25T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.129693 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.129711 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.129718 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.129727 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.129733 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:42Z","lastTransitionTime":"2025-11-25T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.231349 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.231376 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.231385 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.231395 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.231404 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:42Z","lastTransitionTime":"2025-11-25T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.332360 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.332381 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.332388 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.332397 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.332404 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:42Z","lastTransitionTime":"2025-11-25T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.433612 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.433637 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.433645 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.433654 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.433660 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:42Z","lastTransitionTime":"2025-11-25T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.535061 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.535200 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.535344 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.535415 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.535466 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:42Z","lastTransitionTime":"2025-11-25T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.637243 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.637359 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.637456 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.637520 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.637576 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:42Z","lastTransitionTime":"2025-11-25T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.739527 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.739554 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.739563 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.739574 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.739582 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:42Z","lastTransitionTime":"2025-11-25T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.806945 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4"
Nov 25 06:48:42 crc kubenswrapper[4482]: E1125 06:48:42.807031 4482 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 25 06:48:42 crc kubenswrapper[4482]: E1125 06:48:42.807082 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs podName:0a1c9846-2a7e-402e-985f-51a244241bd7 nodeName:}" failed. No retries permitted until 2025-11-25 06:49:46.807068297 +0000 UTC m=+161.295299566 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs") pod "network-metrics-daemon-2xhh4" (UID: "0a1c9846-2a7e-402e-985f-51a244241bd7") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.830146 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4"
Nov 25 06:48:42 crc kubenswrapper[4482]: E1125 06:48:42.830450 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7"
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.840889 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.840907 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.840914 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.840923 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.840946 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:42Z","lastTransitionTime":"2025-11-25T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.942999 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.943044 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.943052 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.943061 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:42 crc kubenswrapper[4482]: I1125 06:48:42.943068 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:42Z","lastTransitionTime":"2025-11-25T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.044151 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.044201 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.044209 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.044218 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.044225 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:43Z","lastTransitionTime":"2025-11-25T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.145716 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.145736 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.145743 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.145752 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.145759 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:43Z","lastTransitionTime":"2025-11-25T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.246900 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.246999 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.247056 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.247118 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.247203 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:43Z","lastTransitionTime":"2025-11-25T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.348447 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.348466 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.348473 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.348481 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.348488 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:43Z","lastTransitionTime":"2025-11-25T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.449306 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.449585 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.449651 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.449717 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.449769 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:43Z","lastTransitionTime":"2025-11-25T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.550787 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.550807 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.550817 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.550827 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.550846 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:43Z","lastTransitionTime":"2025-11-25T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.651900 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.651925 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.651934 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.651945 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.651959 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:43Z","lastTransitionTime":"2025-11-25T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.753878 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.753912 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.753920 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.753928 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.753936 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:43Z","lastTransitionTime":"2025-11-25T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.830244 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.830263 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.830300 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 06:48:43 crc kubenswrapper[4482]: E1125 06:48:43.830378 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:43 crc kubenswrapper[4482]: E1125 06:48:43.830635 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:43 crc kubenswrapper[4482]: E1125 06:48:43.830699 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.855710 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.855739 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.855749 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.855757 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.855764 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:43Z","lastTransitionTime":"2025-11-25T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.957203 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.957226 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.957234 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.957247 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:43 crc kubenswrapper[4482]: I1125 06:48:43.957256 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:43Z","lastTransitionTime":"2025-11-25T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.058575 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.058599 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.058606 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.058615 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.058622 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:44Z","lastTransitionTime":"2025-11-25T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.159673 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.159695 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.159703 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.159712 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.159719 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:44Z","lastTransitionTime":"2025-11-25T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.260960 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.260981 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.260990 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.261002 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.261010 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:44Z","lastTransitionTime":"2025-11-25T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.362501 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.362532 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.362542 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.362554 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.362564 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:44Z","lastTransitionTime":"2025-11-25T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.463920 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.463946 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.463953 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.463962 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.463970 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:44Z","lastTransitionTime":"2025-11-25T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.565623 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.565667 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.565675 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.565684 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.565690 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:44Z","lastTransitionTime":"2025-11-25T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.667735 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.667758 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.667765 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.667774 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.667780 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:44Z","lastTransitionTime":"2025-11-25T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.769433 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.769461 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.769468 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.769476 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.769483 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:44Z","lastTransitionTime":"2025-11-25T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.830371 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4"
Nov 25 06:48:44 crc kubenswrapper[4482]: E1125 06:48:44.830541 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7"
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.830943 4482 scope.go:117] "RemoveContainer" containerID="2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab" Nov 25 06:48:44 crc kubenswrapper[4482]: E1125 06:48:44.831055 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.839103 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.871864 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.871889 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.871897 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.871908 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.871915 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:44Z","lastTransitionTime":"2025-11-25T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.975562 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.975597 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.975625 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.975640 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:44 crc kubenswrapper[4482]: I1125 06:48:44.975650 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:44Z","lastTransitionTime":"2025-11-25T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.077281 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.077306 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.077314 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.077323 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.077331 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:45Z","lastTransitionTime":"2025-11-25T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.178935 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.178961 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.178970 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.178980 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.178987 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:45Z","lastTransitionTime":"2025-11-25T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.281380 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.281401 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.281409 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.281422 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.281429 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:45Z","lastTransitionTime":"2025-11-25T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.382733 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.382757 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.382764 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.382772 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.382779 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:45Z","lastTransitionTime":"2025-11-25T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.483834 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.483890 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.483898 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.483906 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.483913 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:45Z","lastTransitionTime":"2025-11-25T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.585208 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.585227 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.585234 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.585243 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.585249 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:45Z","lastTransitionTime":"2025-11-25T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.686426 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.686445 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.686453 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.686461 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.686468 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:45Z","lastTransitionTime":"2025-11-25T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.787527 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.787553 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.787560 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.787569 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.787575 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:45Z","lastTransitionTime":"2025-11-25T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.829676 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.829689 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.829741 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:45 crc kubenswrapper[4482]: E1125 06:48:45.829742 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:45 crc kubenswrapper[4482]: E1125 06:48:45.829994 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:45 crc kubenswrapper[4482]: E1125 06:48:45.830072 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.837645 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a1c9846-2a7e-402e-985f-51a244241bd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdfzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2xhh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.843830 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c6d24f-5701-4ec0-a0fe-c04ff96666d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afabe0c26cf96847b662a1236a8d5f22205769282690735780ef24580c394cb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac50938bda83c23f2391068a14a8c5f84554f1181814baf540b75713d7aa7493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac50938bda83c23f2391068a14a8c5f84554f1181814baf540b75713d7aa7493\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.851225 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f261c1c4171ec6d701d1a792e6b7bcc31abdb4687b82dec1338236a355c18ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c343303a3a88e772b0e195824163df2c73d41f015fff31d0339f070ed187a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.858277 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eafc42579cbca47af6a00f122ed93d6dab1316e9d05ac31488cc72078dd58e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.864818 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46a7d6ef-c931-4f15-893b-c9436d6de1f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf938622c660fd52b00def63191ed1804a17ba4cd31b94a0ebf06c3882b5234a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vhnvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4qzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.876602 4482 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbf029b-8319-4aeb-90a4-351c3936e7dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8709f43d6d41a907d6ea4c08be2005972df9da67d65eedab232c0d86997e7f6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba1700c0555a48399a3600c1af86b8b583eff231a7a821d1b56415ed921c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a048ed10ebdb87ca57b7db08bf15bf22a6f89bb2e4a9a0c65862cb949aaf12c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dc5064047c99e4e68086e62e10665a650905f8f6e5ef6e6c829802ecd2ebfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e73801accce2339ba7e2ce18619fed860176d1385fda2ee9faccdb5bb1d1b7df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8553cf63d28a7716a6e99bb815f823963e3c270a832cab11f708e49df7fe603b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8553cf63d28a7716a6e99bb815f823963e3c270a832cab11f708e49df7fe603b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f69a24f1c1cabfe32d3ee36250ff2af116c1aebe35d1f9883454cbaa66918f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f69a24f1c1cabfe32d3ee36250ff2af116c1aebe35d1f9883454cbaa66918f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://d5aad9b71aaec08ec8ad8b9b321d52be182b58f5f8de85c1c6b87857f2d7af0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5aad9b71aaec08ec8ad8b9b321d52be182b58f5f8de85c1c6b87857f2d7af0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.885004 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6df3d28-c8f6-4460-b529-d5d1327f8e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 06:47:23.115822 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 06:47:23.115991 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 06:47:23.119324 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2902759948/tls.crt::/tmp/serving-cert-2902759948/tls.key\\\\\\\"\\\\nI1125 06:47:23.495861 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 06:47:23.498186 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 06:47:23.498208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 06:47:23.498231 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 06:47:23.498236 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 06:47:23.502719 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 06:47:23.502741 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 06:47:23.502749 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 06:47:23.502752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 06:47:23.502755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 06:47:23.502758 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 06:47:23.502919 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 06:47:23.504302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.889182 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.889210 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.889219 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.889232 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.889240 4482 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:45Z","lastTransitionTime":"2025-11-25T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.891832 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xk9c4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"606a3794-ab1c-469d-b489-83811b456769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a513f6ad77ff2829a04d242e1ce2d843c53a195b06f6eab111cab7258916d5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf2vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xk9c4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.902870 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.914273 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce05e1398cb71abe31e212993f8a2f2f3665285
b27b727374cb327c930720ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:15Z\\\",\\\"message\\\":\\\"rol-plane-749d76644c-qpxjn\\\\nI1125 06:48:15.469567 6422 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn in node crc\\\\nI1125 06:48:15.469572 6422 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn after 0 failed attempt(s)\\\\nI1125 06:48:15.469581 6422 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn\\\\nF1125 06:48:15.469587 6422 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:15Z is after 2025-08-24T17:21:41Z]\\\\nI1125 06:48:15.4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:48:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c58dr_openshift-ovn-kubernetes(2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c58dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.920796 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9407ebd6-89eb-4522-81c8-b224bf948ba4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874ef3fb4e966ff8ff51017c11f1e7e1ad6da809715580fbf43373cf1bcebcf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2cf5e0df7b4e4173b212d0eab8435b21ce7aab304b3e3ce0b4b0a64fe0ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2n2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qpxjn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.927895 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.935502 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d109f1d2c44d10acac4347a7d2d55368497a335c4e0d9ca079c487860e873e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.942791 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.950309 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b5qtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T06:48:11Z\\\",\\\"message\\\":\\\"2025-11-25T06:47:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea\\\\n2025-11-25T06:47:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_254d23c9-76e9-4501-a01d-33e292aa08ea to /host/opt/cni/bin/\\\\n2025-11-25T06:47:26Z [verbose] multus-daemon started\\\\n2025-11-25T06:47:26Z [verbose] Readiness Indicator file check\\\\n2025-11-25T06:48:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:48:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nsxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b5qtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.957713 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1deb5c8-962e-449c-aa23-4f6a457e6f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d50f4b04ff1d0f3b372fc317ba932c08916a05ddc54b78afc038935700bcbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ebc1aab42c59cc002f840ef448cb84ab2b688e313b3061a88abdc2d5bd32e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5ff9bf8bd04afaddffcfeafa8bd2d41cd4053e2c722e1f0f613e2382f3bea9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.964359 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d314f82-e6a3-44d6-b59b-b68552730866\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d5f730d9fc2cf67bca05c6b7ca8035f813d91a8ac6b069f70457b5a63e9d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://645d4b2d1e65d0d5b0e29914ac6e7ac26a91d65ad5ea42a309e983cf633e9fb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c736aa6a7231244785b8651eda784a6aa13f745d1e95a7d4963458ebe6647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://447f658ec43ecb599e160ae97123f2da6ecb71cfce40975ebf566e82cc475c28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.974314 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b267b2b-7642-40e7-985d-4f5d8cff541c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ce7699d875fc587d2c460c8004b74f3089df164304ba979b7e90840d7b5f5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22992d9378e2da61b85a648b889b2f6faa1c755d7df3834280ffcd5ab13e2ef1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d48efee3ce1e1a3c8a410ce3cf522cdbd65724d2c6e85976d34b4c2b69b27b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982383d42aba3a1547018cf0dca58c38e79671b3d0fb4806c02bd0c3ae711311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f4eeadf96b11243812bb91f2a536f75d0816cf2cb5472976736a41231bd67a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add57b45aeacf75df1dea034a52119ce36f77ed1ded38865c34c26a368381df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6057c4223d96666f904382b26fec82177476af782b8d70e2d00d0a1e24e082d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T06:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T06:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntjhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dvpcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.981075 4482 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m5qcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"371864cf-3771-4348-9e81-929eee585f98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T06:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ea35bb5efd6f892ebaa8aabb5bde5d9e0f321c9275e6d44fd3588ad4822355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T06:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djzlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T06:47:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m5qcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:45Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.991241 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.991264 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.991271 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.991284 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:45 crc kubenswrapper[4482]: I1125 06:48:45.991292 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:45Z","lastTransitionTime":"2025-11-25T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.093355 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.093381 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.093389 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.093398 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.093406 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:46Z","lastTransitionTime":"2025-11-25T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.194660 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.194688 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.194696 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.194706 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.194715 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:46Z","lastTransitionTime":"2025-11-25T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.296057 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.296100 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.296110 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.296121 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.296130 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:46Z","lastTransitionTime":"2025-11-25T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.398287 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.398326 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.398336 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.398350 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.398360 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:46Z","lastTransitionTime":"2025-11-25T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.499992 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.500018 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.500026 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.500040 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.500049 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:46Z","lastTransitionTime":"2025-11-25T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.602049 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.602082 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.602091 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.602105 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.602113 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:46Z","lastTransitionTime":"2025-11-25T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.704227 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.704267 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.704276 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.704289 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.704298 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:46Z","lastTransitionTime":"2025-11-25T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.806347 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.806397 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.806405 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.806416 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.806424 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:46Z","lastTransitionTime":"2025-11-25T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.830588 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:46 crc kubenswrapper[4482]: E1125 06:48:46.830681 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.908272 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.908384 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.908448 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.908520 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:46 crc kubenswrapper[4482]: I1125 06:48:46.908584 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:46Z","lastTransitionTime":"2025-11-25T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:47 crc kubenswrapper[4482]: I1125 06:48:47.010292 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:47 crc kubenswrapper[4482]: I1125 06:48:47.010341 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:47 crc kubenswrapper[4482]: I1125 06:48:47.010351 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:47 crc kubenswrapper[4482]: I1125 06:48:47.010361 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:47 crc kubenswrapper[4482]: I1125 06:48:47.010368 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:47Z","lastTransitionTime":"2025-11-25T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 06:48:47 crc kubenswrapper[4482]: I1125 06:48:47.829867 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 06:48:47 crc kubenswrapper[4482]: I1125 06:48:47.829921 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 06:48:47 crc kubenswrapper[4482]: I1125 06:48:47.830018 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 06:48:47 crc kubenswrapper[4482]: E1125 06:48:47.830063 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 06:48:47 crc kubenswrapper[4482]: E1125 06:48:47.830147 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 06:48:47 crc kubenswrapper[4482]: E1125 06:48:47.830114 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.130511 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.130565 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.130574 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.130587 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.130595 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:48Z","lastTransitionTime":"2025-11-25T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:48 crc kubenswrapper[4482]: E1125 06:48:48.138975 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:48Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.140905 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.140931 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.140939 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.140948 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.140955 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:48Z","lastTransitionTime":"2025-11-25T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:48 crc kubenswrapper[4482]: E1125 06:48:48.148298 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:48Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.150103 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.150120 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
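The status patch above is rejected before it ever reaches the node object: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-25T06:48:48Z. A minimal sketch of confirming that expiry from the node (hypothetical diagnostic, not part of the log; assumes the endpoint is reachable and the third-party `cryptography` package is installed):

```python
# Sketch: fetch the webhook's serving certificate and compare its notAfter
# to the current time. Host and port taken from the webhook error above.
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False        # we only want to read the certificate,
ctx.verify_mode = ssl.CERT_NONE   # not to validate it (it is expired)

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
# not_valid_after_utc requires cryptography >= 42; older versions expose
# the naive-datetime attribute not_valid_after instead.
print("notAfter:", cert.not_valid_after_utc)
print("expired: ", now > cert.not_valid_after_utc)
```

With an expired webhook certificate, every kubelet status patch fails the same way, which is why the "will retry" errors recur until the certificate is rotated or the node clock is corrected.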
event="NodeHasNoDiskPressure" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.150128 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.150137 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.150144 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:48Z","lastTransitionTime":"2025-11-25T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:48 crc kubenswrapper[4482]: E1125 06:48:48.157152 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:48Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.159081 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.159112 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.159122 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.159134 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.159141 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:48Z","lastTransitionTime":"2025-11-25T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:48 crc kubenswrapper[4482]: E1125 06:48:48.167347 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:48Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.170105 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.170207 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.170218 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.170228 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.170239 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:48Z","lastTransitionTime":"2025-11-25T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:48 crc kubenswrapper[4482]: E1125 06:48:48.180065 4482 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T06:48:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bc8fd5c-2a65-48a1-a2b0-04fc92f0c611\\\",\\\"systemUUID\\\":\\\"dc9d32b7-fef4-46db-bcb5-f2930afc514b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T06:48:48Z is after 2025-08-24T17:21:41Z" Nov 25 06:48:48 crc kubenswrapper[4482]: E1125 06:48:48.180346 4482 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.228965 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.229055 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.229115 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.229194 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.229271 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:48Z","lastTransitionTime":"2025-11-25T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.330937 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.330967 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.330974 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.330984 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.330991 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:48Z","lastTransitionTime":"2025-11-25T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.432526 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.432672 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.432744 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.432810 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.432878 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:48Z","lastTransitionTime":"2025-11-25T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.534359 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.534402 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.534415 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.534432 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.534444 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:48Z","lastTransitionTime":"2025-11-25T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.635909 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.635926 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.635935 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.635945 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.635952 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:48Z","lastTransitionTime":"2025-11-25T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.737265 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.737320 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.737331 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.737344 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.737352 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:48Z","lastTransitionTime":"2025-11-25T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.830215 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:48 crc kubenswrapper[4482]: E1125 06:48:48.830315 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.839224 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.839245 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.839253 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.839263 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.839270 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:48Z","lastTransitionTime":"2025-11-25T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.941085 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.941130 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.941143 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.941199 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:48 crc kubenswrapper[4482]: I1125 06:48:48.941213 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:48Z","lastTransitionTime":"2025-11-25T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.042687 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.042708 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.042716 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.042724 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.042732 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:49Z","lastTransitionTime":"2025-11-25T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.144485 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.144516 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.144526 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.144563 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.144572 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:49Z","lastTransitionTime":"2025-11-25T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.245907 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.245932 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.245940 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.245949 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.245957 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:49Z","lastTransitionTime":"2025-11-25T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.347941 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.347972 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.347980 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.347993 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.348003 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:49Z","lastTransitionTime":"2025-11-25T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.450059 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.450095 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.450103 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.450115 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.450124 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:49Z","lastTransitionTime":"2025-11-25T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.551407 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.551427 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.551436 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.551447 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.551454 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:49Z","lastTransitionTime":"2025-11-25T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.653325 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.653354 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.653363 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.653373 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.653382 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:49Z","lastTransitionTime":"2025-11-25T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.755196 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.755235 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.755244 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.755259 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.755267 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:49Z","lastTransitionTime":"2025-11-25T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.829816 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.829899 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.829956 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:48:49 crc kubenswrapper[4482]: E1125 06:48:49.829957 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:48:49 crc kubenswrapper[4482]: E1125 06:48:49.830041 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:48:49 crc kubenswrapper[4482]: E1125 06:48:49.830139 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.857252 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.857279 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.857287 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.857296 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.857304 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:49Z","lastTransitionTime":"2025-11-25T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.958757 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.958784 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.958794 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.958806 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:49 crc kubenswrapper[4482]: I1125 06:48:49.958815 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:49Z","lastTransitionTime":"2025-11-25T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.060633 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.060675 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.060685 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.060696 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.060704 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:50Z","lastTransitionTime":"2025-11-25T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.162420 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.162473 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.162484 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.162496 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.162505 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:50Z","lastTransitionTime":"2025-11-25T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.264390 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.264433 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.264442 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.264454 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.264462 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:50Z","lastTransitionTime":"2025-11-25T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.366400 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.366433 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.366442 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.366455 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.366464 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:50Z","lastTransitionTime":"2025-11-25T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.468100 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.468131 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.468140 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.468154 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.468162 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:50Z","lastTransitionTime":"2025-11-25T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.569617 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.569642 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.569652 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.569663 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.569670 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:50Z","lastTransitionTime":"2025-11-25T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.671099 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.671123 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.671131 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.671141 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.671148 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:50Z","lastTransitionTime":"2025-11-25T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.772367 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.772392 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.772401 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.772410 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.772417 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:50Z","lastTransitionTime":"2025-11-25T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.830034 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4"
Nov 25 06:48:50 crc kubenswrapper[4482]: E1125 06:48:50.830130 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.874491 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.874513 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.874522 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.874531 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.874537 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:50Z","lastTransitionTime":"2025-11-25T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.976378 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.976400 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.976407 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.976415 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:50 crc kubenswrapper[4482]: I1125 06:48:50.976422 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:50Z","lastTransitionTime":"2025-11-25T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.078343 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.078391 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.078401 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.078413 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.078421 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:51Z","lastTransitionTime":"2025-11-25T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.180099 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.180141 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.180149 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.180176 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.180185 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:51Z","lastTransitionTime":"2025-11-25T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.281729 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.281750 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.281757 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.281767 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.281774 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:51Z","lastTransitionTime":"2025-11-25T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.383241 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.383272 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.383280 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.383293 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.383302 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:51Z","lastTransitionTime":"2025-11-25T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.484835 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.484889 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.484897 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.484907 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.484914 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:51Z","lastTransitionTime":"2025-11-25T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.586610 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.586636 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.586647 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.586657 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.586665 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:51Z","lastTransitionTime":"2025-11-25T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.688032 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.688059 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.688067 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.688095 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.688104 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:51Z","lastTransitionTime":"2025-11-25T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.789649 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.789673 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.789682 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.789693 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.789700 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:51Z","lastTransitionTime":"2025-11-25T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.830223 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.830252 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.830256 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 06:48:51 crc kubenswrapper[4482]: E1125 06:48:51.830303 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 06:48:51 crc kubenswrapper[4482]: E1125 06:48:51.830424 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 06:48:51 crc kubenswrapper[4482]: E1125 06:48:51.830460 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.891245 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.891281 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.891289 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.891298 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.891305 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:51Z","lastTransitionTime":"2025-11-25T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.993137 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.993200 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.993210 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.993219 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:51 crc kubenswrapper[4482]: I1125 06:48:51.993226 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:51Z","lastTransitionTime":"2025-11-25T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.094704 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.094726 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.094734 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.094743 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.094749 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:52Z","lastTransitionTime":"2025-11-25T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.196296 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.196362 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.196376 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.196390 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.196399 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:52Z","lastTransitionTime":"2025-11-25T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.298280 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.298329 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.298340 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.298353 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.298361 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:52Z","lastTransitionTime":"2025-11-25T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.399570 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.399593 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.399600 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.399610 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.399617 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:52Z","lastTransitionTime":"2025-11-25T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.501450 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.501469 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.501476 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.501486 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.501493 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:52Z","lastTransitionTime":"2025-11-25T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.603334 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.603361 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.603369 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.603382 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.603391 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:52Z","lastTransitionTime":"2025-11-25T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.705007 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.705029 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.705037 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.705047 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.705055 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:52Z","lastTransitionTime":"2025-11-25T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.806991 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.807080 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.807098 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.807109 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.807117 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:52Z","lastTransitionTime":"2025-11-25T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.830476 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4"
Nov 25 06:48:52 crc kubenswrapper[4482]: E1125 06:48:52.830748 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.908587 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.908626 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.908637 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.908652 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:52 crc kubenswrapper[4482]: I1125 06:48:52.908661 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:52Z","lastTransitionTime":"2025-11-25T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.010271 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.010299 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.010309 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.010319 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.010325 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:53Z","lastTransitionTime":"2025-11-25T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.111906 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.112046 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.112111 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.112198 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.112274 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:53Z","lastTransitionTime":"2025-11-25T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.213559 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.213593 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.213602 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.213615 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.213625 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:53Z","lastTransitionTime":"2025-11-25T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.315605 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.315633 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.315642 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.315654 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.315662 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:53Z","lastTransitionTime":"2025-11-25T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.418202 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.418248 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.418258 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.418270 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.418278 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:53Z","lastTransitionTime":"2025-11-25T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.519854 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.519894 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.519901 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.519913 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.519921 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:53Z","lastTransitionTime":"2025-11-25T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.621942 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.621969 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.621977 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.621989 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.621997 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:53Z","lastTransitionTime":"2025-11-25T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.723382 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.723415 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.723446 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.723457 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.723465 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:53Z","lastTransitionTime":"2025-11-25T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.825570 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.825594 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.825602 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.825611 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.825619 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:53Z","lastTransitionTime":"2025-11-25T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.829750 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.829770 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 06:48:53 crc kubenswrapper[4482]: E1125 06:48:53.829839 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.829752 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 06:48:53 crc kubenswrapper[4482]: E1125 06:48:53.829948 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 06:48:53 crc kubenswrapper[4482]: E1125 06:48:53.829988 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.927131 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.927156 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.927165 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.927194 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:53 crc kubenswrapper[4482]: I1125 06:48:53.927204 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:53Z","lastTransitionTime":"2025-11-25T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.028321 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.028404 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.028489 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.028546 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.028619 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:54Z","lastTransitionTime":"2025-11-25T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
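The cadence of the sync loop can be read straight off the klog timestamps (.366, .468, .569, ... is roughly a 101 ms period). A small sketch, assuming the same journal text on stdin, that computes the spacing between successive "Node became not ready" records (2025 is assumed below only because klog timestamps carry no year):

#!/usr/bin/env python3
# Sketch: measure the interval between successive "Node became not ready"
# records using the klog timestamp token (e.g. "I1125 06:48:53.927204").
import re
import sys
from datetime import datetime

TS_RE = re.compile(r'[IEW](\d{4} \d{2}:\d{2}:\d{2}\.\d{6}) \d+ setters\.go')

prev = None
for line in sys.stdin:
    m = TS_RE.search(line)
    if not m:
        continue
    # klog omits the year; assume 2025 purely so strptime can parse.
    ts = datetime.strptime("2025" + m.group(1), "%Y%m%d %H:%M:%S.%f")
    if prev is not None:
        print(f"{(ts - prev).total_seconds() * 1000.0:.1f} ms")
    prev = ts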
Has your network provider started?"} Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.130287 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.130316 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.130325 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.130339 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.130348 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:54Z","lastTransitionTime":"2025-11-25T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.231858 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.231897 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.231905 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.231915 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.231923 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:54Z","lastTransitionTime":"2025-11-25T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.333330 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.333360 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.333369 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.333381 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.333390 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:54Z","lastTransitionTime":"2025-11-25T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.435312 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.435343 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.435353 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.435367 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.435376 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:54Z","lastTransitionTime":"2025-11-25T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.536870 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.536905 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.536914 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.536941 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.536951 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:54Z","lastTransitionTime":"2025-11-25T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.638831 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.638871 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.638880 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.638893 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.638901 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:54Z","lastTransitionTime":"2025-11-25T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.741060 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.741087 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.741096 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.741109 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.741118 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:54Z","lastTransitionTime":"2025-11-25T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.830805 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:48:54 crc kubenswrapper[4482]: E1125 06:48:54.830898 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.842611 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.842631 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.842640 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.842649 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.842657 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:54Z","lastTransitionTime":"2025-11-25T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.944213 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.944248 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.944256 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.944269 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:54 crc kubenswrapper[4482]: I1125 06:48:54.944278 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:54Z","lastTransitionTime":"2025-11-25T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.046221 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.046257 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.046268 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.046282 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.046290 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:55Z","lastTransitionTime":"2025-11-25T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.148184 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.148219 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.148226 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.148236 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.148243 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:55Z","lastTransitionTime":"2025-11-25T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.249522 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.249560 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.249569 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.249581 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.249590 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:55Z","lastTransitionTime":"2025-11-25T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.351745 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.351775 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.351784 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.351796 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.351804 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:55Z","lastTransitionTime":"2025-11-25T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.453796 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.453821 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.453828 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.453838 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.453847 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:55Z","lastTransitionTime":"2025-11-25T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.555971 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.555991 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.555999 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.556008 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.556014 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:55Z","lastTransitionTime":"2025-11-25T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.657638 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.657664 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.657672 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.657681 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.657688 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:55Z","lastTransitionTime":"2025-11-25T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.760151 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.760240 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.760256 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.760274 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.760287 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:55Z","lastTransitionTime":"2025-11-25T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.829760 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.829785 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.829855 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 06:48:55 crc kubenswrapper[4482]: E1125 06:48:55.829976 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 06:48:55 crc kubenswrapper[4482]: E1125 06:48:55.830110 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 06:48:55 crc kubenswrapper[4482]: E1125 06:48:55.830242 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.862094 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.862133 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.862144 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.862159 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.862189 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:55Z","lastTransitionTime":"2025-11-25T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
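The kubelet writes the full Ready condition as JSON inside the structured "Node became not ready" message, so the stuck-NotReady window can be recovered mechanically from an excerpt like the one above. A minimal sketch (not part of the log; the usage and one-entry-per-line input are assumptions) that extracts each condition and prints the transition time and reason:

    import json
    import re
    import sys

    # Matches the setters.go message above and captures the condition JSON
    # that follows it; assumes each journal entry sits on its own line.
    pat = re.compile(r'"Node became not ready" node="([^"]+)" condition=(\{.*\})')

    for line in sys.stdin:
        m = pat.search(line)
        if m:
            cond = json.loads(m.group(2))
            print(m.group(1), cond["lastTransitionTime"], cond["reason"])

Example usage (assumed): journalctl -u kubelet --no-pager | python3 ready_watch.py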
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.862727 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qpxjn" podStartSLOduration=90.862714748 podStartE2EDuration="1m30.862714748s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:48:55.861086077 +0000 UTC m=+110.349317336" watchObservedRunningTime="2025-11-25 06:48:55.862714748 +0000 UTC m=+110.350946008"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.909112 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-b5qtx" podStartSLOduration=90.909097883 podStartE2EDuration="1m30.909097883s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:48:55.908594684 +0000 UTC m=+110.396825943" watchObservedRunningTime="2025-11-25 06:48:55.909097883 +0000 UTC m=+110.397329143"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.919739 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=92.919718696 podStartE2EDuration="1m32.919718696s" podCreationTimestamp="2025-11-25 06:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:48:55.919528206 +0000 UTC m=+110.407759465" watchObservedRunningTime="2025-11-25 06:48:55.919718696 +0000 UTC m=+110.407949954"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.939462 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.939448469 podStartE2EDuration="57.939448469s" podCreationTimestamp="2025-11-25 06:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:48:55.927518026 +0000 UTC m=+110.415749285" watchObservedRunningTime="2025-11-25 06:48:55.939448469 +0000 UTC m=+110.427679727"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.939661 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dvpcl" podStartSLOduration=90.939656681 podStartE2EDuration="1m30.939656681s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:48:55.939553877 +0000 UTC m=+110.427785136" watchObservedRunningTime="2025-11-25 06:48:55.939656681 +0000 UTC m=+110.427887940"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.946429 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-m5qcx" podStartSLOduration=90.946417142 podStartE2EDuration="1m30.946417142s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:48:55.946210703 +0000 UTC m=+110.434441962" watchObservedRunningTime="2025-11-25 06:48:55.946417142 +0000 UTC m=+110.434648401"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.960455 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.960440142 podStartE2EDuration="31.960440142s" podCreationTimestamp="2025-11-25 06:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:48:55.960432477 +0000 UTC m=+110.448663736" watchObservedRunningTime="2025-11-25 06:48:55.960440142 +0000 UTC m=+110.448671400"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.964000 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.964032 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.964041 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.964055 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:55 crc kubenswrapper[4482]: I1125 06:48:55.964064 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:55Z","lastTransitionTime":"2025-11-25T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.039309 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=12.039293179 podStartE2EDuration="12.039293179s" podCreationTimestamp="2025-11-25 06:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:48:56.039253975 +0000 UTC m=+110.527485234" watchObservedRunningTime="2025-11-25 06:48:56.039293179 +0000 UTC m=+110.527524438"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.039548 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podStartSLOduration=91.03954341 podStartE2EDuration="1m31.03954341s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:48:55.995264515 +0000 UTC m=+110.483495774" watchObservedRunningTime="2025-11-25 06:48:56.03954341 +0000 UTC m=+110.527774669"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.060499 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=93.060486943 podStartE2EDuration="1m33.060486943s" podCreationTimestamp="2025-11-25 06:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:48:56.052758165 +0000 UTC m=+110.540989424" watchObservedRunningTime="2025-11-25 06:48:56.060486943 +0000 UTC m=+110.548718202"
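In each startup-latency entry above, podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp; firstStartedPulling and lastFinishedPulling are the Go zero time because no image pull was observed. A worked check of the first entry's arithmetic, with the two timestamps copied from the ovnkube-control-plane line (truncated to microseconds, since datetime carries no nanoseconds):

    from datetime import datetime, timezone

    created = datetime(2025, 11, 25, 6, 47, 25, tzinfo=timezone.utc)            # podCreationTimestamp
    observed = datetime(2025, 11, 25, 6, 48, 55, 862714, tzinfo=timezone.utc)   # watchObservedRunningTime
    print((observed - created).total_seconds())  # 90.862714, matching podStartSLOduration=90.862714748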
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.065915 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.065951 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.065973 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.065989 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.065998 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:56Z","lastTransitionTime":"2025-11-25T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.070311 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xk9c4" podStartSLOduration=93.070300061 podStartE2EDuration="1m33.070300061s" podCreationTimestamp="2025-11-25 06:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:48:56.061134725 +0000 UTC m=+110.549365984" watchObservedRunningTime="2025-11-25 06:48:56.070300061 +0000 UTC m=+110.558531321"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.167662 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.167700 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.167711 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.167725 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.167734 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:56Z","lastTransitionTime":"2025-11-25T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.269543 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.269569 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.269576 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.269588 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.269596 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:56Z","lastTransitionTime":"2025-11-25T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.371894 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.371932 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.371940 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.371954 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.371962 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:56Z","lastTransitionTime":"2025-11-25T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.473508 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.473537 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.473548 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.473560 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.473570 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:56Z","lastTransitionTime":"2025-11-25T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.575750 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.575806 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.575815 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.575828 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.575837 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:56Z","lastTransitionTime":"2025-11-25T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.677814 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.677848 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.677856 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.677881 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.677890 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:56Z","lastTransitionTime":"2025-11-25T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.779296 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.779332 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.779342 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.779353 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.779361 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:56Z","lastTransitionTime":"2025-11-25T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.830598 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4"
Nov 25 06:48:56 crc kubenswrapper[4482]: E1125 06:48:56.830691 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.881495 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.881573 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.881584 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.881597 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.881605 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:56Z","lastTransitionTime":"2025-11-25T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.983628 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.983658 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.983666 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.983679 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:56 crc kubenswrapper[4482]: I1125 06:48:56.983687 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:56Z","lastTransitionTime":"2025-11-25T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
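Every NotReady heartbeat above names the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A quick diagnostic sketch (hypothetical, meant to be run on the node itself; the path is taken verbatim from the kubelet message) is simply to list that directory:

    import os

    cni_dir = "/etc/kubernetes/cni/net.d"  # path quoted in the kubelet message above
    if os.path.isdir(cni_dir):
        # An empty listing reproduces exactly the condition the kubelet reports.
        print(sorted(os.listdir(cni_dir)) or "directory exists but is empty")
    else:
        print(cni_dir, "does not exist")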
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.085500 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.085529 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.085537 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.085547 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.085555 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:57Z","lastTransitionTime":"2025-11-25T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.187790 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.187824 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.187833 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.187845 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.187855 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:57Z","lastTransitionTime":"2025-11-25T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.289067 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.289095 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.289106 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.289117 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.289125 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:57Z","lastTransitionTime":"2025-11-25T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.390918 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.390948 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.390956 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.390967 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.390974 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:57Z","lastTransitionTime":"2025-11-25T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.492856 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.492921 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.492930 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.492943 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.492953 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:57Z","lastTransitionTime":"2025-11-25T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.594533 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.594578 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.594588 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.594599 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.594607 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:57Z","lastTransitionTime":"2025-11-25T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.696802 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.696849 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.696859 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.696883 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.696902 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:57Z","lastTransitionTime":"2025-11-25T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.798659 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.798686 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.798695 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.798708 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.798717 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:57Z","lastTransitionTime":"2025-11-25T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.830528 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.830547 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 06:48:57 crc kubenswrapper[4482]: E1125 06:48:57.830609 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.830648 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 06:48:57 crc kubenswrapper[4482]: E1125 06:48:57.830722 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 06:48:57 crc kubenswrapper[4482]: E1125 06:48:57.830823 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.900121 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.900150 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.900157 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.900183 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:57 crc kubenswrapper[4482]: I1125 06:48:57.900191 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:57Z","lastTransitionTime":"2025-11-25T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.001725 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.001747 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.001755 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.001764 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.001772 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:58Z","lastTransitionTime":"2025-11-25T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.103577 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.103645 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.103656 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.103668 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.103676 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:58Z","lastTransitionTime":"2025-11-25T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.205309 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.205340 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.205355 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.205367 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.205375 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:58Z","lastTransitionTime":"2025-11-25T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.225468 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b5qtx_2384eec7-0cd1-4bc5-9bc7-b5bb42607c37/kube-multus/1.log"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.225811 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b5qtx_2384eec7-0cd1-4bc5-9bc7-b5bb42607c37/kube-multus/0.log"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.225844 4482 generic.go:334] "Generic (PLEG): container finished" podID="2384eec7-0cd1-4bc5-9bc7-b5bb42607c37" containerID="898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16" exitCode=1
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.225864 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b5qtx" event={"ID":"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37","Type":"ContainerDied","Data":"898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16"}
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.225896 4482 scope.go:117] "RemoveContainer" containerID="c93b6aacd6a3a66d9b7fd532660bf0619d361a880c43167e622fc609ec5954e7"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.226603 4482 scope.go:117] "RemoveContainer" containerID="898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16"
Nov 25 06:48:58 crc kubenswrapper[4482]: E1125 06:48:58.226780 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-b5qtx_openshift-multus(2384eec7-0cd1-4bc5-9bc7-b5bb42607c37)\"" pod="openshift-multus/multus-b5qtx" podUID="2384eec7-0cd1-4bc5-9bc7-b5bb42607c37"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.306683 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.306728 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.306738 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.306754 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.306764 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:58Z","lastTransitionTime":"2025-11-25T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
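The CrashLoopBackOff entry above shows the kubelet's restart back-off starting at 10s. Upstream kubelet doubles that delay per consecutive failed restart up to a 5-minute cap; those default values are quoted from memory of the upstream code, not from this log, so treat this as an illustrative sketch of the schedule:

    def crashloop_delay(failed_restarts, base=10, cap=300):
        """Back-off in seconds before the next restart attempt (assumed kubelet defaults)."""
        return min(base * (2 ** failed_restarts), cap)

    print([crashloop_delay(n) for n in range(7)])  # [10, 20, 40, 80, 160, 300, 300]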
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.408704 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.408736 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.408745 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.408757 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.408765 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:58Z","lastTransitionTime":"2025-11-25T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.472107 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.472151 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.472160 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.472189 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.472199 4482 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T06:48:58Z","lastTransitionTime":"2025-11-25T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.505275 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"]
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.505608 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.506815 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.508045 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.508267 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.509806 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.626679 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b782112e-20af-42da-8cb2-4eea974ed63d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.626718 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b782112e-20af-42da-8cb2-4eea974ed63d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.626738 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b782112e-20af-42da-8cb2-4eea974ed63d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.626756 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b782112e-20af-42da-8cb2-4eea974ed63d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.626782 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b782112e-20af-42da-8cb2-4eea974ed63d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.727619 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b782112e-20af-42da-8cb2-4eea974ed63d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.727671 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b782112e-20af-42da-8cb2-4eea974ed63d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.727695 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b782112e-20af-42da-8cb2-4eea974ed63d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.727713 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b782112e-20af-42da-8cb2-4eea974ed63d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.727731 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b782112e-20af-42da-8cb2-4eea974ed63d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.727726 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b782112e-20af-42da-8cb2-4eea974ed63d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.727755 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b782112e-20af-42da-8cb2-4eea974ed63d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.728608 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b782112e-20af-42da-8cb2-4eea974ed63d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.731640 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b782112e-20af-42da-8cb2-4eea974ed63d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.740610 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b782112e-20af-42da-8cb2-4eea974ed63d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xxq6v\" (UID: \"b782112e-20af-42da-8cb2-4eea974ed63d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
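The volume lines above walk each of the five cluster-version-operator volumes through the reconciler's phases: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. A sketch (not part of the log; input and usage are assumptions) that reads these entries, with their escaped \" quoting exactly as printed, and reports volumes that started mounting but never reached SetUp succeeded:

    import re
    import sys

    # Captures the volume name between the escaped quotes, e.g. volume \"kube-api-access\"
    vol = re.compile(r'volume \\"([^\\"]+)\\"')
    started, done = set(), set()

    for line in sys.stdin:
        m = vol.search(line)
        if not m:
            continue
        if "operationExecutor.MountVolume started" in line:
            started.add(m.group(1))
        elif "MountVolume.SetUp succeeded" in line:
            done.add(m.group(1))

    print("still pending:", sorted(started - done))  # [] for the entries above

On the entries above this prints an empty list, since all five volumes reach SetUp succeeded.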
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.814954 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v"
Nov 25 06:48:58 crc kubenswrapper[4482]: I1125 06:48:58.830132 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4"
Nov 25 06:48:58 crc kubenswrapper[4482]: E1125 06:48:58.830234 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7"
Nov 25 06:48:59 crc kubenswrapper[4482]: I1125 06:48:59.228624 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b5qtx_2384eec7-0cd1-4bc5-9bc7-b5bb42607c37/kube-multus/1.log"
Nov 25 06:48:59 crc kubenswrapper[4482]: I1125 06:48:59.229751 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v" event={"ID":"b782112e-20af-42da-8cb2-4eea974ed63d","Type":"ContainerStarted","Data":"4c889def9ee9351a06361ac2d72102b1722c681b541718979f641e0ba0702b05"}
Nov 25 06:48:59 crc kubenswrapper[4482]: I1125 06:48:59.229792 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v" event={"ID":"b782112e-20af-42da-8cb2-4eea974ed63d","Type":"ContainerStarted","Data":"fe4bfd236045be9082a6538144275a7361d3c2b3bdc11f5cc37e9853003fed6b"}
Nov 25 06:48:59 crc kubenswrapper[4482]: I1125 06:48:59.238729 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxq6v" podStartSLOduration=94.238713977 podStartE2EDuration="1m34.238713977s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:48:59.237709983 +0000 UTC m=+113.725941242" watchObservedRunningTime="2025-11-25 06:48:59.238713977 +0000 UTC m=+113.726945236"
Nov 25 06:48:59 crc kubenswrapper[4482]: I1125 06:48:59.830284 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 06:48:59 crc kubenswrapper[4482]: I1125 06:48:59.830542 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 06:48:59 crc kubenswrapper[4482]: I1125 06:48:59.830597 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 06:48:59 crc kubenswrapper[4482]: E1125 06:48:59.830791 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 06:48:59 crc kubenswrapper[4482]: I1125 06:48:59.830833 4482 scope.go:117] "RemoveContainer" containerID="2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab"
Nov 25 06:48:59 crc kubenswrapper[4482]: E1125 06:48:59.830842 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 06:48:59 crc kubenswrapper[4482]: E1125 06:48:59.830695 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 06:49:00 crc kubenswrapper[4482]: I1125 06:49:00.233857 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/3.log"
Nov 25 06:49:00 crc kubenswrapper[4482]: I1125 06:49:00.239513 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerStarted","Data":"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d"}
Nov 25 06:49:00 crc kubenswrapper[4482]: I1125 06:49:00.239870 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr"
Nov 25 06:49:00 crc kubenswrapper[4482]: I1125 06:49:00.258580 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podStartSLOduration=95.258567998 podStartE2EDuration="1m35.258567998s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:00.258234468 +0000 UTC m=+114.746465737" watchObservedRunningTime="2025-11-25 06:49:00.258567998 +0000 UTC m=+114.746799257"
Nov 25 06:49:00 crc kubenswrapper[4482]: I1125 06:49:00.440985 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2xhh4"]
Nov 25 06:49:00 crc kubenswrapper[4482]: I1125 06:49:00.441084 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4"
Nov 25 06:49:00 crc kubenswrapper[4482]: E1125 06:49:00.441160 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7"
Nov 25 06:49:01 crc kubenswrapper[4482]: I1125 06:49:01.829971 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4"
Nov 25 06:49:01 crc kubenswrapper[4482]: I1125 06:49:01.830022 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 06:49:01 crc kubenswrapper[4482]: E1125 06:49:01.830096 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7"
Nov 25 06:49:01 crc kubenswrapper[4482]: I1125 06:49:01.829990 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 06:49:01 crc kubenswrapper[4482]: E1125 06:49:01.830229 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 06:49:01 crc kubenswrapper[4482]: I1125 06:49:01.830264 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 06:49:01 crc kubenswrapper[4482]: E1125 06:49:01.830303 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 06:49:01 crc kubenswrapper[4482]: E1125 06:49:01.830402 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 06:49:03 crc kubenswrapper[4482]: I1125 06:49:03.830478 4482 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:49:03 crc kubenswrapper[4482]: I1125 06:49:03.830515 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:49:03 crc kubenswrapper[4482]: E1125 06:49:03.830578 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:49:03 crc kubenswrapper[4482]: I1125 06:49:03.830593 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:49:03 crc kubenswrapper[4482]: E1125 06:49:03.830698 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:49:03 crc kubenswrapper[4482]: E1125 06:49:03.830896 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:49:03 crc kubenswrapper[4482]: I1125 06:49:03.830994 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:49:03 crc kubenswrapper[4482]: E1125 06:49:03.831044 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:49:05 crc kubenswrapper[4482]: I1125 06:49:05.829938 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:49:05 crc kubenswrapper[4482]: I1125 06:49:05.829965 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:49:05 crc kubenswrapper[4482]: I1125 06:49:05.829988 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:49:05 crc kubenswrapper[4482]: I1125 06:49:05.830918 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:49:05 crc kubenswrapper[4482]: E1125 06:49:05.830969 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:49:05 crc kubenswrapper[4482]: E1125 06:49:05.831019 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:49:05 crc kubenswrapper[4482]: E1125 06:49:05.831055 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:49:05 crc kubenswrapper[4482]: E1125 06:49:05.830877 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:49:05 crc kubenswrapper[4482]: E1125 06:49:05.850780 4482 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 25 06:49:05 crc kubenswrapper[4482]: E1125 06:49:05.894520 4482 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 06:49:07 crc kubenswrapper[4482]: I1125 06:49:07.830308 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:49:07 crc kubenswrapper[4482]: E1125 06:49:07.830963 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:49:07 crc kubenswrapper[4482]: I1125 06:49:07.830375 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:49:07 crc kubenswrapper[4482]: E1125 06:49:07.831149 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:49:07 crc kubenswrapper[4482]: I1125 06:49:07.830349 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:49:07 crc kubenswrapper[4482]: E1125 06:49:07.831353 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:49:07 crc kubenswrapper[4482]: I1125 06:49:07.830419 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:49:07 crc kubenswrapper[4482]: E1125 06:49:07.831518 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:49:09 crc kubenswrapper[4482]: I1125 06:49:09.830784 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:49:09 crc kubenswrapper[4482]: E1125 06:49:09.830894 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:49:09 crc kubenswrapper[4482]: I1125 06:49:09.831090 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:49:09 crc kubenswrapper[4482]: E1125 06:49:09.831138 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:49:09 crc kubenswrapper[4482]: I1125 06:49:09.831279 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:49:09 crc kubenswrapper[4482]: E1125 06:49:09.831333 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:49:09 crc kubenswrapper[4482]: I1125 06:49:09.831398 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:49:09 crc kubenswrapper[4482]: E1125 06:49:09.831446 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:49:10 crc kubenswrapper[4482]: I1125 06:49:10.830250 4482 scope.go:117] "RemoveContainer" containerID="898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16" Nov 25 06:49:10 crc kubenswrapper[4482]: E1125 06:49:10.895651 4482 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 06:49:11 crc kubenswrapper[4482]: I1125 06:49:11.266478 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b5qtx_2384eec7-0cd1-4bc5-9bc7-b5bb42607c37/kube-multus/1.log" Nov 25 06:49:11 crc kubenswrapper[4482]: I1125 06:49:11.266514 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b5qtx" event={"ID":"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37","Type":"ContainerStarted","Data":"a912979c2425ba11c5085507bce694e01f44b8a323722e10580037b6644c5083"} Nov 25 06:49:11 crc kubenswrapper[4482]: I1125 06:49:11.830317 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:49:11 crc kubenswrapper[4482]: I1125 06:49:11.830346 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:49:11 crc kubenswrapper[4482]: E1125 06:49:11.830422 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:49:11 crc kubenswrapper[4482]: I1125 06:49:11.830447 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:49:11 crc kubenswrapper[4482]: I1125 06:49:11.830460 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:49:11 crc kubenswrapper[4482]: E1125 06:49:11.830518 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:49:11 crc kubenswrapper[4482]: E1125 06:49:11.830559 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:49:11 crc kubenswrapper[4482]: E1125 06:49:11.830609 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:49:13 crc kubenswrapper[4482]: I1125 06:49:13.830613 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:49:13 crc kubenswrapper[4482]: I1125 06:49:13.830714 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:49:13 crc kubenswrapper[4482]: E1125 06:49:13.830729 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:49:13 crc kubenswrapper[4482]: E1125 06:49:13.830827 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:49:13 crc kubenswrapper[4482]: I1125 06:49:13.830893 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:49:13 crc kubenswrapper[4482]: E1125 06:49:13.830946 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:49:13 crc kubenswrapper[4482]: I1125 06:49:13.831073 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:49:13 crc kubenswrapper[4482]: E1125 06:49:13.831138 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:49:14 crc kubenswrapper[4482]: I1125 06:49:14.840295 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:49:15 crc kubenswrapper[4482]: I1125 06:49:15.830740 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:49:15 crc kubenswrapper[4482]: I1125 06:49:15.830771 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:49:15 crc kubenswrapper[4482]: E1125 06:49:15.831507 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xhh4" podUID="0a1c9846-2a7e-402e-985f-51a244241bd7" Nov 25 06:49:15 crc kubenswrapper[4482]: I1125 06:49:15.831563 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:49:15 crc kubenswrapper[4482]: E1125 06:49:15.831586 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 06:49:15 crc kubenswrapper[4482]: I1125 06:49:15.831650 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:49:15 crc kubenswrapper[4482]: E1125 06:49:15.831669 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 06:49:15 crc kubenswrapper[4482]: E1125 06:49:15.831742 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 06:49:17 crc kubenswrapper[4482]: I1125 06:49:17.830593 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:49:17 crc kubenswrapper[4482]: I1125 06:49:17.830638 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:49:17 crc kubenswrapper[4482]: I1125 06:49:17.830696 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:49:17 crc kubenswrapper[4482]: I1125 06:49:17.830716 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:49:17 crc kubenswrapper[4482]: I1125 06:49:17.832429 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 06:49:17 crc kubenswrapper[4482]: I1125 06:49:17.832681 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 06:49:17 crc kubenswrapper[4482]: I1125 06:49:17.833701 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 06:49:17 crc kubenswrapper[4482]: I1125 06:49:17.833956 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 06:49:17 crc kubenswrapper[4482]: I1125 06:49:17.834107 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 06:49:17 crc kubenswrapper[4482]: I1125 06:49:17.834125 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.684797 4482 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.706830 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-78vqp"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.707208 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.710509 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.710529 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.710904 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.711102 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.711139 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.711393 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.713031 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-shnd8"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.713263 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.713619 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zhw8w"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.713931 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.714422 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.714471 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.714772 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.715030 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.715235 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.715259 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.718635 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.718964 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.720183 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.720279 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.720611 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-gqc49"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.720745 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.720829 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.721430 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.722068 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-78b9v"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.722299 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-78b9v" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.722673 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.722701 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.722712 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.722823 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.723123 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.723199 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.723236 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.723514 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.723659 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.723688 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.723882 4482 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.723949 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9ggws"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.723960 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.723973 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.724036 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.724155 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.724228 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.725102 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.725221 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.725316 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.725401 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.725540 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.725593 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.725626 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.725697 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.725734 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.725828 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.726103 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.726224 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 
06:49:18.726325 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.726560 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.728372 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f8zk7"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.729019 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.729855 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.730207 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.730900 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.731361 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.731428 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.731535 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.737779 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.737957 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.738043 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.738108 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.738125 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.738431 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.738465 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.738559 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.738658 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 
06:49:18.738828 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.744191 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6p2lq"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.744661 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.748914 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.748943 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.748952 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.749123 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.749146 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.749279 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.749423 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.749845 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.749961 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.750024 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.750120 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.749968 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.749996 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.750454 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.750694 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.750868 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9tqlb"] Nov 25 
06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.751309 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9tqlb" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.751614 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.751797 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.752090 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.751615 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.752434 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.752445 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.752764 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n56kp"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.753094 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n56kp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.753769 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.753800 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.755185 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.755304 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.755470 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.755615 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.755722 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.755835 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.756107 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.756426 4482 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.756483 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.756734 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.757278 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.757294 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.757410 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.757422 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.757556 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.757636 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.757711 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.758032 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.758109 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.759532 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.759937 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.762602 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fbpdk"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.763026 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.765215 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-78vqp"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.765439 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.765954 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.786442 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/566947cb-8c3e-4ece-b086-98aeec306451-images\") pod \"machine-api-operator-5694c8668f-78vqp\" (UID: \"566947cb-8c3e-4ece-b086-98aeec306451\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.786476 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/566947cb-8c3e-4ece-b086-98aeec306451-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-78vqp\" (UID: \"566947cb-8c3e-4ece-b086-98aeec306451\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.786503 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtjv7\" (UniqueName: \"kubernetes.io/projected/566947cb-8c3e-4ece-b086-98aeec306451-kube-api-access-rtjv7\") pod \"machine-api-operator-5694c8668f-78vqp\" (UID: \"566947cb-8c3e-4ece-b086-98aeec306451\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.786553 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/566947cb-8c3e-4ece-b086-98aeec306451-config\") pod \"machine-api-operator-5694c8668f-78vqp\" (UID: \"566947cb-8c3e-4ece-b086-98aeec306451\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.804203 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.804487 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.805485 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.805991 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.805998 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.806798 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9ggws"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.807802 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-shnd8"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.816208 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gqc49"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.831444 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n"] Nov 25 
06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.831578 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.862377 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.862668 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.863056 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.865108 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.865277 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.866698 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.866858 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zhw8w"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.867401 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6p2lq"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.868235 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-78b9v"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.868757 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.869269 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.869449 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.870203 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.870331 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.870636 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.870751 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.870830 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-26zgh"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.871131 4482 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-v9rqm"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.871610 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-v9rqm" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.871803 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.871898 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.872089 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-26zgh" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.878906 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887383 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/566947cb-8c3e-4ece-b086-98aeec306451-images\") pod \"machine-api-operator-5694c8668f-78vqp\" (UID: \"566947cb-8c3e-4ece-b086-98aeec306451\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887417 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887435 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/43f33231-2b25-4a54-87da-e93c8cf3ee18-node-pullsecrets\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887452 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/db2a2377-c791-40ef-80e9-15b3884ec7a4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vn9jt\" (UID: \"db2a2377-c791-40ef-80e9-15b3884ec7a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887471 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/43f33231-2b25-4a54-87da-e93c8cf3ee18-encryption-config\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887488 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/861a93e1-ffca-40f2-ada4-2f736f05ba1c-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-dzgqh\" (UID: \"861a93e1-ffca-40f2-ada4-2f736f05ba1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887502 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887515 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6mmj\" (UniqueName: \"kubernetes.io/projected/43f33231-2b25-4a54-87da-e93c8cf3ee18-kube-api-access-d6mmj\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887530 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887545 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887560 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-oauth-serving-cert\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887580 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-image-import-ca\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887593 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0235c8b-901e-4439-8d57-44af3ea11486-config\") pod \"kube-apiserver-operator-766d6c64bb-j675n\" (UID: \"d0235c8b-901e-4439-8d57-44af3ea11486\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887606 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/d0235c8b-901e-4439-8d57-44af3ea11486-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-j675n\" (UID: \"d0235c8b-901e-4439-8d57-44af3ea11486\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887620 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/587f32ef-b1da-4e40-a1bc-33ba39c207e8-config\") pod \"route-controller-manager-6576b87f9c-qbn2w\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887634 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ee0a1d1-8292-47bf-885b-a154443af6f4-serving-cert\") pod \"openshift-config-operator-7777fb866f-9ggws\" (UID: \"1ee0a1d1-8292-47bf-885b-a154443af6f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887646 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b-config\") pod \"console-operator-58897d9998-9tqlb\" (UID: \"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b\") " pod="openshift-console-operator/console-operator-58897d9998-9tqlb" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887661 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8kzh\" (UniqueName: \"kubernetes.io/projected/15832b7c-8637-457d-bf40-c9d8ae03445d-kube-api-access-b8kzh\") pod \"dns-operator-744455d44c-n56kp\" (UID: \"15832b7c-8637-457d-bf40-c9d8ae03445d\") " pod="openshift-dns-operator/dns-operator-744455d44c-n56kp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887675 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-etcd-serving-ca\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887688 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-config\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887703 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40242495-a63d-4300-b420-f7eb4317ea0e-trusted-ca\") pod \"ingress-operator-5b745b69d9-7zhtl\" (UID: \"40242495-a63d-4300-b420-f7eb4317ea0e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887715 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887730 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887745 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhkxx\" (UniqueName: \"kubernetes.io/projected/861a93e1-ffca-40f2-ada4-2f736f05ba1c-kube-api-access-qhkxx\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzgqh\" (UID: \"861a93e1-ffca-40f2-ada4-2f736f05ba1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887759 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/587f32ef-b1da-4e40-a1bc-33ba39c207e8-serving-cert\") pod \"route-controller-manager-6576b87f9c-qbn2w\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887774 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2a2377-c791-40ef-80e9-15b3884ec7a4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vn9jt\" (UID: \"db2a2377-c791-40ef-80e9-15b3884ec7a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887789 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887801 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-audit\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887814 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40242495-a63d-4300-b420-f7eb4317ea0e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7zhtl\" (UID: \"40242495-a63d-4300-b420-f7eb4317ea0e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887838 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/61e22994-72d9-477f-8f3f-89a77ade8196-audit-dir\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887859 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2057b44-f9f5-426d-ac80-b3c576dcb59c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6t25z\" (UID: \"f2057b44-f9f5-426d-ac80-b3c576dcb59c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887872 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czmhp\" (UniqueName: \"kubernetes.io/projected/40242495-a63d-4300-b420-f7eb4317ea0e-kube-api-access-czmhp\") pod \"ingress-operator-5b745b69d9-7zhtl\" (UID: \"40242495-a63d-4300-b420-f7eb4317ea0e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887891 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/566947cb-8c3e-4ece-b086-98aeec306451-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-78vqp\" (UID: \"566947cb-8c3e-4ece-b086-98aeec306451\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887907 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hkp7\" (UniqueName: \"kubernetes.io/projected/1ee0a1d1-8292-47bf-885b-a154443af6f4-kube-api-access-2hkp7\") pod \"openshift-config-operator-7777fb866f-9ggws\" (UID: \"1ee0a1d1-8292-47bf-885b-a154443af6f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887937 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887952 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp4sw\" (UniqueName: \"kubernetes.io/projected/db2a2377-c791-40ef-80e9-15b3884ec7a4-kube-api-access-rp4sw\") pod \"cluster-image-registry-operator-dc59b4c8b-vn9jt\" (UID: \"db2a2377-c791-40ef-80e9-15b3884ec7a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887971 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.887986 4482 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/861a93e1-ffca-40f2-ada4-2f736f05ba1c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzgqh\" (UID: \"861a93e1-ffca-40f2-ada4-2f736f05ba1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888001 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5pfr\" (UniqueName: \"kubernetes.io/projected/61e22994-72d9-477f-8f3f-89a77ade8196-kube-api-access-g5pfr\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888014 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43f33231-2b25-4a54-87da-e93c8cf3ee18-etcd-client\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888028 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b875n\" (UniqueName: \"kubernetes.io/projected/32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b-kube-api-access-b875n\") pod \"console-operator-58897d9998-9tqlb\" (UID: \"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b\") " pod="openshift-console-operator/console-operator-58897d9998-9tqlb" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888042 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9zw8\" (UniqueName: \"kubernetes.io/projected/368e9f64-0e31-464e-9714-b4b3ea73cc36-kube-api-access-z9zw8\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888058 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db2a2377-c791-40ef-80e9-15b3884ec7a4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vn9jt\" (UID: \"db2a2377-c791-40ef-80e9-15b3884ec7a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888071 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-serving-cert\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888085 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-oauth-config\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888099 4482 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-rtjv7\" (UniqueName: \"kubernetes.io/projected/566947cb-8c3e-4ece-b086-98aeec306451-kube-api-access-rtjv7\") pod \"machine-api-operator-5694c8668f-78vqp\" (UID: \"566947cb-8c3e-4ece-b086-98aeec306451\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888116 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1ee0a1d1-8292-47bf-885b-a154443af6f4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9ggws\" (UID: \"1ee0a1d1-8292-47bf-885b-a154443af6f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888132 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-audit-policies\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888148 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888163 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/587f32ef-b1da-4e40-a1bc-33ba39c207e8-client-ca\") pod \"route-controller-manager-6576b87f9c-qbn2w\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888190 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43f33231-2b25-4a54-87da-e93c8cf3ee18-serving-cert\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888204 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b-serving-cert\") pod \"console-operator-58897d9998-9tqlb\" (UID: \"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b\") " pod="openshift-console-operator/console-operator-58897d9998-9tqlb" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888212 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/566947cb-8c3e-4ece-b086-98aeec306451-images\") pod \"machine-api-operator-5694c8668f-78vqp\" (UID: \"566947cb-8c3e-4ece-b086-98aeec306451\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888218 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b-trusted-ca\") pod \"console-operator-58897d9998-9tqlb\" (UID: \"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b\") " pod="openshift-console-operator/console-operator-58897d9998-9tqlb" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888335 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cgg4\" (UniqueName: \"kubernetes.io/projected/f2057b44-f9f5-426d-ac80-b3c576dcb59c-kube-api-access-4cgg4\") pod \"cluster-samples-operator-665b6dd947-6t25z\" (UID: \"f2057b44-f9f5-426d-ac80-b3c576dcb59c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888518 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43f33231-2b25-4a54-87da-e93c8cf3ee18-audit-dir\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888535 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888562 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qg5\" (UniqueName: \"kubernetes.io/projected/13c2044e-5435-4487-be5b-fafa43b6db3a-kube-api-access-n9qg5\") pod \"downloads-7954f5f757-78b9v\" (UID: \"13c2044e-5435-4487-be5b-fafa43b6db3a\") " pod="openshift-console/downloads-7954f5f757-78b9v" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888578 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/15832b7c-8637-457d-bf40-c9d8ae03445d-metrics-tls\") pod \"dns-operator-744455d44c-n56kp\" (UID: \"15832b7c-8637-457d-bf40-c9d8ae03445d\") " pod="openshift-dns-operator/dns-operator-744455d44c-n56kp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888608 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0235c8b-901e-4439-8d57-44af3ea11486-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-j675n\" (UID: \"d0235c8b-901e-4439-8d57-44af3ea11486\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888623 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-config\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888641 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-trusted-ca-bundle\") pod 
\"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888655 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/566947cb-8c3e-4ece-b086-98aeec306451-config\") pod \"machine-api-operator-5694c8668f-78vqp\" (UID: \"566947cb-8c3e-4ece-b086-98aeec306451\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888684 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888699 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/40242495-a63d-4300-b420-f7eb4317ea0e-metrics-tls\") pod \"ingress-operator-5b745b69d9-7zhtl\" (UID: \"40242495-a63d-4300-b420-f7eb4317ea0e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888713 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-service-ca\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888742 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26tg\" (UniqueName: \"kubernetes.io/projected/587f32ef-b1da-4e40-a1bc-33ba39c207e8-kube-api-access-x26tg\") pod \"route-controller-manager-6576b87f9c-qbn2w\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.888830 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.889478 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/566947cb-8c3e-4ece-b086-98aeec306451-config\") pod \"machine-api-operator-5694c8668f-78vqp\" (UID: \"566947cb-8c3e-4ece-b086-98aeec306451\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.891766 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.893050 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.893342 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.893377 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.893576 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5w6bs"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.893631 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.893659 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.894291 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.897972 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.907222 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.907868 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.908369 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.909047 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-58h2l"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.909891 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-6czb8"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.910338 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.910438 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gvbtp"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.910558 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.911354 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.911596 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.911639 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-6czb8" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.911894 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-58h2l" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.913164 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/566947cb-8c3e-4ece-b086-98aeec306451-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-78vqp\" (UID: \"566947cb-8c3e-4ece-b086-98aeec306451\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.914391 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.915318 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.932905 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.933532 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.933560 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2h8cx"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.933621 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.933953 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.934224 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.934305 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.935275 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f8zk7"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.936199 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9tqlb"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.937033 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.937428 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.939607 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.940103 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.944269 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.944997 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.945980 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.948353 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.954526 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fbpdk"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.956275 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n56kp"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.957122 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.958020 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.958824 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-26zgh"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.960088 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gp4k7"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.960799 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.960833 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gp4k7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.961887 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-58h2l"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.962611 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-v9rqm"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.963460 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2h8cx"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.964592 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.965376 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.966639 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.968123 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.968303 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.970018 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.971009 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5w6bs"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.972003 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.972946 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.974729 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gp4k7"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.976143 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gvbtp"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.976665 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.977346 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-ng9pj"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.977764 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ng9pj" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.978258 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-b248r"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.979318 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-b248r"] Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.979372 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-b248r" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989218 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989242 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/587f32ef-b1da-4e40-a1bc-33ba39c207e8-client-ca\") pod \"route-controller-manager-6576b87f9c-qbn2w\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989259 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43f33231-2b25-4a54-87da-e93c8cf3ee18-serving-cert\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989273 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b-serving-cert\") pod \"console-operator-58897d9998-9tqlb\" (UID: \"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b\") " pod="openshift-console-operator/console-operator-58897d9998-9tqlb" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989293 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b-trusted-ca\") pod \"console-operator-58897d9998-9tqlb\" (UID: \"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b\") " pod="openshift-console-operator/console-operator-58897d9998-9tqlb" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989307 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cgg4\" (UniqueName: \"kubernetes.io/projected/f2057b44-f9f5-426d-ac80-b3c576dcb59c-kube-api-access-4cgg4\") pod \"cluster-samples-operator-665b6dd947-6t25z\" (UID: \"f2057b44-f9f5-426d-ac80-b3c576dcb59c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989321 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43f33231-2b25-4a54-87da-e93c8cf3ee18-audit-dir\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc 
kubenswrapper[4482]: I1125 06:49:18.989336 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989350 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9qg5\" (UniqueName: \"kubernetes.io/projected/13c2044e-5435-4487-be5b-fafa43b6db3a-kube-api-access-n9qg5\") pod \"downloads-7954f5f757-78b9v\" (UID: \"13c2044e-5435-4487-be5b-fafa43b6db3a\") " pod="openshift-console/downloads-7954f5f757-78b9v" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989363 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/15832b7c-8637-457d-bf40-c9d8ae03445d-metrics-tls\") pod \"dns-operator-744455d44c-n56kp\" (UID: \"15832b7c-8637-457d-bf40-c9d8ae03445d\") " pod="openshift-dns-operator/dns-operator-744455d44c-n56kp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989378 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0235c8b-901e-4439-8d57-44af3ea11486-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-j675n\" (UID: \"d0235c8b-901e-4439-8d57-44af3ea11486\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989391 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-config\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989404 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-trusted-ca-bundle\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989428 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989442 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/40242495-a63d-4300-b420-f7eb4317ea0e-metrics-tls\") pod \"ingress-operator-5b745b69d9-7zhtl\" (UID: \"40242495-a63d-4300-b420-f7eb4317ea0e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989455 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-service-ca\") pod 
\"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989471 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x26tg\" (UniqueName: \"kubernetes.io/projected/587f32ef-b1da-4e40-a1bc-33ba39c207e8-kube-api-access-x26tg\") pod \"route-controller-manager-6576b87f9c-qbn2w\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989488 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989501 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/43f33231-2b25-4a54-87da-e93c8cf3ee18-node-pullsecrets\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989515 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/db2a2377-c791-40ef-80e9-15b3884ec7a4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vn9jt\" (UID: \"db2a2377-c791-40ef-80e9-15b3884ec7a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989529 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/861a93e1-ffca-40f2-ada4-2f736f05ba1c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzgqh\" (UID: \"861a93e1-ffca-40f2-ada4-2f736f05ba1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989543 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/43f33231-2b25-4a54-87da-e93c8cf3ee18-encryption-config\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989556 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989570 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6mmj\" (UniqueName: \"kubernetes.io/projected/43f33231-2b25-4a54-87da-e93c8cf3ee18-kube-api-access-d6mmj\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " 
pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989583 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989604 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989617 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-oauth-serving-cert\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989632 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0235c8b-901e-4439-8d57-44af3ea11486-config\") pod \"kube-apiserver-operator-766d6c64bb-j675n\" (UID: \"d0235c8b-901e-4439-8d57-44af3ea11486\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989644 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0235c8b-901e-4439-8d57-44af3ea11486-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-j675n\" (UID: \"d0235c8b-901e-4439-8d57-44af3ea11486\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989656 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/587f32ef-b1da-4e40-a1bc-33ba39c207e8-config\") pod \"route-controller-manager-6576b87f9c-qbn2w\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989670 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-image-import-ca\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989683 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ee0a1d1-8292-47bf-885b-a154443af6f4-serving-cert\") pod \"openshift-config-operator-7777fb866f-9ggws\" (UID: \"1ee0a1d1-8292-47bf-885b-a154443af6f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989697 4482 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-etcd-serving-ca\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989711 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b-config\") pod \"console-operator-58897d9998-9tqlb\" (UID: \"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b\") " pod="openshift-console-operator/console-operator-58897d9998-9tqlb" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989725 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8kzh\" (UniqueName: \"kubernetes.io/projected/15832b7c-8637-457d-bf40-c9d8ae03445d-kube-api-access-b8kzh\") pod \"dns-operator-744455d44c-n56kp\" (UID: \"15832b7c-8637-457d-bf40-c9d8ae03445d\") " pod="openshift-dns-operator/dns-operator-744455d44c-n56kp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989738 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40242495-a63d-4300-b420-f7eb4317ea0e-trusted-ca\") pod \"ingress-operator-5b745b69d9-7zhtl\" (UID: \"40242495-a63d-4300-b420-f7eb4317ea0e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989751 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-config\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989764 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989777 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989790 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989805 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhkxx\" (UniqueName: \"kubernetes.io/projected/861a93e1-ffca-40f2-ada4-2f736f05ba1c-kube-api-access-qhkxx\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzgqh\" 
(UID: \"861a93e1-ffca-40f2-ada4-2f736f05ba1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989818 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/587f32ef-b1da-4e40-a1bc-33ba39c207e8-serving-cert\") pod \"route-controller-manager-6576b87f9c-qbn2w\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989832 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2a2377-c791-40ef-80e9-15b3884ec7a4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vn9jt\" (UID: \"db2a2377-c791-40ef-80e9-15b3884ec7a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989846 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40242495-a63d-4300-b420-f7eb4317ea0e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7zhtl\" (UID: \"40242495-a63d-4300-b420-f7eb4317ea0e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989857 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-audit\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989883 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/61e22994-72d9-477f-8f3f-89a77ade8196-audit-dir\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989898 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2057b44-f9f5-426d-ac80-b3c576dcb59c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6t25z\" (UID: \"f2057b44-f9f5-426d-ac80-b3c576dcb59c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989911 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czmhp\" (UniqueName: \"kubernetes.io/projected/40242495-a63d-4300-b420-f7eb4317ea0e-kube-api-access-czmhp\") pod \"ingress-operator-5b745b69d9-7zhtl\" (UID: \"40242495-a63d-4300-b420-f7eb4317ea0e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989936 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hkp7\" (UniqueName: \"kubernetes.io/projected/1ee0a1d1-8292-47bf-885b-a154443af6f4-kube-api-access-2hkp7\") pod \"openshift-config-operator-7777fb866f-9ggws\" (UID: \"1ee0a1d1-8292-47bf-885b-a154443af6f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" Nov 25 06:49:18 crc 
kubenswrapper[4482]: I1125 06:49:18.989951 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989966 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp4sw\" (UniqueName: \"kubernetes.io/projected/db2a2377-c791-40ef-80e9-15b3884ec7a4-kube-api-access-rp4sw\") pod \"cluster-image-registry-operator-dc59b4c8b-vn9jt\" (UID: \"db2a2377-c791-40ef-80e9-15b3884ec7a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989982 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.989997 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/861a93e1-ffca-40f2-ada4-2f736f05ba1c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzgqh\" (UID: \"861a93e1-ffca-40f2-ada4-2f736f05ba1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.990013 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5pfr\" (UniqueName: \"kubernetes.io/projected/61e22994-72d9-477f-8f3f-89a77ade8196-kube-api-access-g5pfr\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.990025 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43f33231-2b25-4a54-87da-e93c8cf3ee18-etcd-client\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.990038 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b875n\" (UniqueName: \"kubernetes.io/projected/32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b-kube-api-access-b875n\") pod \"console-operator-58897d9998-9tqlb\" (UID: \"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b\") " pod="openshift-console-operator/console-operator-58897d9998-9tqlb" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.990052 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9zw8\" (UniqueName: \"kubernetes.io/projected/368e9f64-0e31-464e-9714-b4b3ea73cc36-kube-api-access-z9zw8\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.990066 4482 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db2a2377-c791-40ef-80e9-15b3884ec7a4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vn9jt\" (UID: \"db2a2377-c791-40ef-80e9-15b3884ec7a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.990083 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1ee0a1d1-8292-47bf-885b-a154443af6f4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9ggws\" (UID: \"1ee0a1d1-8292-47bf-885b-a154443af6f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.990097 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-audit-policies\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.990110 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-serving-cert\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.990124 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-oauth-config\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.990460 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.990752 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-service-ca\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.991802 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-trusted-ca-bundle\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.993137 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.993739 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.994432 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-oauth-config\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.994662 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-oauth-serving-cert\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.994680 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/43f33231-2b25-4a54-87da-e93c8cf3ee18-encryption-config\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.995428 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0235c8b-901e-4439-8d57-44af3ea11486-config\") pod \"kube-apiserver-operator-766d6c64bb-j675n\" (UID: \"d0235c8b-901e-4439-8d57-44af3ea11486\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.995649 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b-serving-cert\") pod \"console-operator-58897d9998-9tqlb\" (UID: \"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b\") " pod="openshift-console-operator/console-operator-58897d9998-9tqlb" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.995956 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.996215 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/15832b7c-8637-457d-bf40-c9d8ae03445d-metrics-tls\") pod \"dns-operator-744455d44c-n56kp\" (UID: \"15832b7c-8637-457d-bf40-c9d8ae03445d\") " pod="openshift-dns-operator/dns-operator-744455d44c-n56kp" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.996755 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/61e22994-72d9-477f-8f3f-89a77ade8196-audit-dir\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 
06:49:18.996899 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.996903 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.996975 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/43f33231-2b25-4a54-87da-e93c8cf3ee18-node-pullsecrets\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.997420 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.997720 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/587f32ef-b1da-4e40-a1bc-33ba39c207e8-config\") pod \"route-controller-manager-6576b87f9c-qbn2w\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.997798 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-image-import-ca\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.998482 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-audit\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.999014 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-config\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.999022 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/587f32ef-b1da-4e40-a1bc-33ba39c207e8-client-ca\") pod \"route-controller-manager-6576b87f9c-qbn2w\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.999804 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:18 crc kubenswrapper[4482]: I1125 06:49:18.999898 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:18.999991 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/db2a2377-c791-40ef-80e9-15b3884ec7a4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vn9jt\" (UID: \"db2a2377-c791-40ef-80e9-15b3884ec7a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.000467 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/861a93e1-ffca-40f2-ada4-2f736f05ba1c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzgqh\" (UID: \"861a93e1-ffca-40f2-ada4-2f736f05ba1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.000869 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-config\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.001323 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2057b44-f9f5-426d-ac80-b3c576dcb59c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6t25z\" (UID: \"f2057b44-f9f5-426d-ac80-b3c576dcb59c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.001609 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0235c8b-901e-4439-8d57-44af3ea11486-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-j675n\" (UID: \"d0235c8b-901e-4439-8d57-44af3ea11486\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.002228 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43f33231-2b25-4a54-87da-e93c8cf3ee18-audit-dir\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.003004 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1ee0a1d1-8292-47bf-885b-a154443af6f4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9ggws\" (UID: \"1ee0a1d1-8292-47bf-885b-a154443af6f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.003801 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ee0a1d1-8292-47bf-885b-a154443af6f4-serving-cert\") pod \"openshift-config-operator-7777fb866f-9ggws\" (UID: \"1ee0a1d1-8292-47bf-885b-a154443af6f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.003882 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b-config\") pod \"console-operator-58897d9998-9tqlb\" (UID: \"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b\") " pod="openshift-console-operator/console-operator-58897d9998-9tqlb" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.003866 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43f33231-2b25-4a54-87da-e93c8cf3ee18-serving-cert\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.004400 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/861a93e1-ffca-40f2-ada4-2f736f05ba1c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzgqh\" (UID: \"861a93e1-ffca-40f2-ada4-2f736f05ba1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.004981 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/587f32ef-b1da-4e40-a1bc-33ba39c207e8-serving-cert\") pod \"route-controller-manager-6576b87f9c-qbn2w\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.005141 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b-trusted-ca\") pod \"console-operator-58897d9998-9tqlb\" (UID: \"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b\") " pod="openshift-console-operator/console-operator-58897d9998-9tqlb" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.005892 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.006033 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.006069 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-audit-policies\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.006444 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43f33231-2b25-4a54-87da-e93c8cf3ee18-etcd-client\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.006453 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/43f33231-2b25-4a54-87da-e93c8cf3ee18-etcd-serving-ca\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.006600 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2a2377-c791-40ef-80e9-15b3884ec7a4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vn9jt\" (UID: \"db2a2377-c791-40ef-80e9-15b3884ec7a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.006657 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.007599 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-serving-cert\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.008674 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.009280 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.028197 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.048974 4482 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.076300 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.079220 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40242495-a63d-4300-b420-f7eb4317ea0e-trusted-ca\") pod \"ingress-operator-5b745b69d9-7zhtl\" (UID: \"40242495-a63d-4300-b420-f7eb4317ea0e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.089330 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.109162 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.113231 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/40242495-a63d-4300-b420-f7eb4317ea0e-metrics-tls\") pod \"ingress-operator-5b745b69d9-7zhtl\" (UID: \"40242495-a63d-4300-b420-f7eb4317ea0e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.129031 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.148986 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.189345 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.209276 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.229547 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.269213 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.291136 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.309014 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.328366 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.348769 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.368581 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.388751 4482 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.409269 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.428588 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.459843 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtjv7\" (UniqueName: \"kubernetes.io/projected/566947cb-8c3e-4ece-b086-98aeec306451-kube-api-access-rtjv7\") pod \"machine-api-operator-5694c8668f-78vqp\" (UID: \"566947cb-8c3e-4ece-b086-98aeec306451\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.468391 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.489326 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.509410 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.529020 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.549196 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.568525 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.588441 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.609324 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.620182 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.628413 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.648243 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.669270 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.689709 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.709019 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.730010 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.731581 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-78vqp"] Nov 25 06:49:19 crc kubenswrapper[4482]: W1125 06:49:19.736354 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod566947cb_8c3e_4ece_b086_98aeec306451.slice/crio-3d0ccc1e7b4ea129bcd527af95a50263629a453b876a8a55b8d3bd2ea438b738 WatchSource:0}: Error finding container 3d0ccc1e7b4ea129bcd527af95a50263629a453b876a8a55b8d3bd2ea438b738: Status 404 returned error can't find the container with id 3d0ccc1e7b4ea129bcd527af95a50263629a453b876a8a55b8d3bd2ea438b738 Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.749247 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.768981 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.789100 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.808824 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.829028 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.849360 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.869228 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.888692 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 
25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.909115 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.928286 4482 request.go:700] Waited for 1.016705547s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0 Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.930617 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.948997 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.969117 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 06:49:19 crc kubenswrapper[4482]: I1125 06:49:19.988295 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.008776 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.029076 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.048830 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.069295 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.088505 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.109049 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.128861 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.149547 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.168639 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.188827 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.208424 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.229310 4482 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.248287 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.269115 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.286910 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" event={"ID":"566947cb-8c3e-4ece-b086-98aeec306451","Type":"ContainerStarted","Data":"a8cbf194201dbf6139d601d99fb392fa68cb9d522c103c875d9857a5e04b7de6"} Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.286950 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" event={"ID":"566947cb-8c3e-4ece-b086-98aeec306451","Type":"ContainerStarted","Data":"1cd66c178569a1b5885d3f5463a83969978624c761330eade74f5c2d8e53c9c9"} Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.286962 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" event={"ID":"566947cb-8c3e-4ece-b086-98aeec306451","Type":"ContainerStarted","Data":"3d0ccc1e7b4ea129bcd527af95a50263629a453b876a8a55b8d3bd2ea438b738"} Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.288976 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.309325 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.328717 4482 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.349159 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.368712 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.388671 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.408788 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.428380 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.448374 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.468276 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.495020 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 
06:49:20.508912 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.528571 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.549048 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.568702 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.589090 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.608247 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.628471 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.648543 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.669312 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.689413 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.709001 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.728833 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.748895 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.768471 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.802789 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0235c8b-901e-4439-8d57-44af3ea11486-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-j675n\" (UID: \"d0235c8b-901e-4439-8d57-44af3ea11486\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.820532 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9qg5\" (UniqueName: \"kubernetes.io/projected/13c2044e-5435-4487-be5b-fafa43b6db3a-kube-api-access-n9qg5\") pod \"downloads-7954f5f757-78b9v\" (UID: \"13c2044e-5435-4487-be5b-fafa43b6db3a\") " pod="openshift-console/downloads-7954f5f757-78b9v"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.839610 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6mmj\" (UniqueName: \"kubernetes.io/projected/43f33231-2b25-4a54-87da-e93c8cf3ee18-kube-api-access-d6mmj\") pod \"apiserver-76f77b778f-6p2lq\" (UID: \"43f33231-2b25-4a54-87da-e93c8cf3ee18\") " pod="openshift-apiserver/apiserver-76f77b778f-6p2lq"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.859687 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x26tg\" (UniqueName: \"kubernetes.io/projected/587f32ef-b1da-4e40-a1bc-33ba39c207e8-kube-api-access-x26tg\") pod \"route-controller-manager-6576b87f9c-qbn2w\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.878942 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8kzh\" (UniqueName: \"kubernetes.io/projected/15832b7c-8637-457d-bf40-c9d8ae03445d-kube-api-access-b8kzh\") pod \"dns-operator-744455d44c-n56kp\" (UID: \"15832b7c-8637-457d-bf40-c9d8ae03445d\") " pod="openshift-dns-operator/dns-operator-744455d44c-n56kp"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.886918 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.901098 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czmhp\" (UniqueName: \"kubernetes.io/projected/40242495-a63d-4300-b420-f7eb4317ea0e-kube-api-access-czmhp\") pod \"ingress-operator-5b745b69d9-7zhtl\" (UID: \"40242495-a63d-4300-b420-f7eb4317ea0e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.902786 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-78b9v"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.921100 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hkp7\" (UniqueName: \"kubernetes.io/projected/1ee0a1d1-8292-47bf-885b-a154443af6f4-kube-api-access-2hkp7\") pod \"openshift-config-operator-7777fb866f-9ggws\" (UID: \"1ee0a1d1-8292-47bf-885b-a154443af6f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.942257 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9zw8\" (UniqueName: \"kubernetes.io/projected/368e9f64-0e31-464e-9714-b4b3ea73cc36-kube-api-access-z9zw8\") pod \"console-f9d7485db-gqc49\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " pod="openshift-console/console-f9d7485db-gqc49"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.948249 4482 request.go:700] Waited for 1.947364895s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.963966 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp4sw\" (UniqueName: \"kubernetes.io/projected/db2a2377-c791-40ef-80e9-15b3884ec7a4-kube-api-access-rp4sw\") pod \"cluster-image-registry-operator-dc59b4c8b-vn9jt\" (UID: \"db2a2377-c791-40ef-80e9-15b3884ec7a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.982979 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db2a2377-c791-40ef-80e9-15b3884ec7a4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vn9jt\" (UID: \"db2a2377-c791-40ef-80e9-15b3884ec7a4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt"
Nov 25 06:49:20 crc kubenswrapper[4482]: I1125 06:49:20.984791 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6p2lq"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.003349 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5pfr\" (UniqueName: \"kubernetes.io/projected/61e22994-72d9-477f-8f3f-89a77ade8196-kube-api-access-g5pfr\") pod \"oauth-openshift-558db77b4-f8zk7\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") " pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.022702 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cgg4\" (UniqueName: \"kubernetes.io/projected/f2057b44-f9f5-426d-ac80-b3c576dcb59c-kube-api-access-4cgg4\") pod \"cluster-samples-operator-665b6dd947-6t25z\" (UID: \"f2057b44-f9f5-426d-ac80-b3c576dcb59c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.027211 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w"]
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.033441 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z"
Nov 25 06:49:21 crc kubenswrapper[4482]: W1125 06:49:21.034386 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod587f32ef_b1da_4e40_a1bc_33ba39c207e8.slice/crio-8d5c3f2b70beeae3d0a6c71c01ba202855c7c51a913cf8c882b07082b3fed232 WatchSource:0}: Error finding container 8d5c3f2b70beeae3d0a6c71c01ba202855c7c51a913cf8c882b07082b3fed232: Status 404 returned error can't find the container with id 8d5c3f2b70beeae3d0a6c71c01ba202855c7c51a913cf8c882b07082b3fed232
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.041804 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b875n\" (UniqueName: \"kubernetes.io/projected/32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b-kube-api-access-b875n\") pod \"console-operator-58897d9998-9tqlb\" (UID: \"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b\") " pod="openshift-console-operator/console-operator-58897d9998-9tqlb"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.043449 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.048660 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n56kp"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.055901 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-78b9v"]
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.057520 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.066634 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhkxx\" (UniqueName: \"kubernetes.io/projected/861a93e1-ffca-40f2-ada4-2f736f05ba1c-kube-api-access-qhkxx\") pod \"openshift-controller-manager-operator-756b6f6bc6-dzgqh\" (UID: \"861a93e1-ffca-40f2-ada4-2f736f05ba1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.082814 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40242495-a63d-4300-b420-f7eb4317ea0e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7zhtl\" (UID: \"40242495-a63d-4300-b420-f7eb4317ea0e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122210 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b992fb6-c183-4a39-9438-9ae970028bbf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gnvtm\" (UID: \"1b992fb6-c183-4a39-9438-9ae970028bbf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122242 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-bound-sa-token\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122269 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzxs6\" (UniqueName: \"kubernetes.io/projected/00ecd959-d344-450d-91de-06136bac3d80-kube-api-access-bzxs6\") pod \"machine-approver-56656f9798-p9xxv\" (UID: \"00ecd959-d344-450d-91de-06136bac3d80\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122287 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef330858-933c-41ce-b34b-db48cd8e8200-serving-cert\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122301 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9xf\" (UniqueName: \"kubernetes.io/projected/ef330858-933c-41ce-b34b-db48cd8e8200-kube-api-access-bj9xf\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122320 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122334 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-557lf\" (UniqueName: \"kubernetes.io/projected/703d9af4-44eb-40f1-a27f-87668bec5700-kube-api-access-557lf\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122352 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b992fb6-c183-4a39-9438-9ae970028bbf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gnvtm\" (UID: \"1b992fb6-c183-4a39-9438-9ae970028bbf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122382 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/79ca89d9-d18a-4927-9c58-47754973b8ed-audit-policies\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122395 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00ecd959-d344-450d-91de-06136bac3d80-config\") pod \"machine-approver-56656f9798-p9xxv\" (UID: \"00ecd959-d344-450d-91de-06136bac3d80\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122410 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-client-ca\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122427 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-registry-tls\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122441 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00ecd959-d344-450d-91de-06136bac3d80-auth-proxy-config\") pod \"machine-approver-56656f9798-p9xxv\" (UID: \"00ecd959-d344-450d-91de-06136bac3d80\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122456 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-trusted-ca\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122471 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rll2z\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-kube-api-access-rll2z\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122491 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/79ca89d9-d18a-4927-9c58-47754973b8ed-encryption-config\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122508 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79ca89d9-d18a-4927-9c58-47754973b8ed-serving-cert\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122528 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-registry-certificates\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122540 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/00ecd959-d344-450d-91de-06136bac3d80-machine-approver-tls\") pod \"machine-approver-56656f9798-p9xxv\" (UID: \"00ecd959-d344-450d-91de-06136bac3d80\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122555 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/79ca89d9-d18a-4927-9c58-47754973b8ed-etcd-client\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122569 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmknd\" (UniqueName: \"kubernetes.io/projected/1b992fb6-c183-4a39-9438-9ae970028bbf-kube-api-access-cmknd\") pod \"openshift-apiserver-operator-796bbdcf4f-gnvtm\" (UID: \"1b992fb6-c183-4a39-9438-9ae970028bbf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122593 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122608 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7cht\" (UniqueName: \"kubernetes.io/projected/79ca89d9-d18a-4927-9c58-47754973b8ed-kube-api-access-k7cht\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122622 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/703d9af4-44eb-40f1-a27f-87668bec5700-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122646 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122660 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122674 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/79ca89d9-d18a-4927-9c58-47754973b8ed-audit-dir\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122692 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/79ca89d9-d18a-4927-9c58-47754973b8ed-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122705 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79ca89d9-d18a-4927-9c58-47754973b8ed-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122719 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/703d9af4-44eb-40f1-a27f-87668bec5700-serving-cert\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122736 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-config\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122752 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703d9af4-44eb-40f1-a27f-87668bec5700-config\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.122774 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/703d9af4-44eb-40f1-a27f-87668bec5700-service-ca-bundle\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w"
Nov 25 06:49:21 crc kubenswrapper[4482]: E1125 06:49:21.123240 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:21.623223579 +0000 UTC m=+136.111454838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.166915 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6p2lq"]
Nov 25 06:49:21 crc kubenswrapper[4482]: W1125 06:49:21.177555 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43f33231_2b25_4a54_87da_e93c8cf3ee18.slice/crio-59f1194725db55e662ca018a375ef3096924abafb1916b51afcac9f4abab8e78 WatchSource:0}: Error finding container 59f1194725db55e662ca018a375ef3096924abafb1916b51afcac9f4abab8e78: Status 404 returned error can't find the container with id 59f1194725db55e662ca018a375ef3096924abafb1916b51afcac9f4abab8e78
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.199447 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gqc49"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.215963 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.225643 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.225791 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d8890596-b9fd-4710-9293-687c209c6090-srv-cert\") pod \"olm-operator-6b444d44fb-djrs9\" (UID: \"d8890596-b9fd-4710-9293-687c209c6090\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.225811 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c6901f70-de25-46df-a04b-7e1dcb979454-proxy-tls\") pod \"machine-config-controller-84d6567774-fv75f\" (UID: \"c6901f70-de25-46df-a04b-7e1dcb979454\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.225853 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ebdc3669-daa5-4220-9042-265024c56738-etcd-service-ca\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.225884 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b992fb6-c183-4a39-9438-9ae970028bbf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gnvtm\" (UID: \"1b992fb6-c183-4a39-9438-9ae970028bbf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.225899 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2b7e856-0bf2-44b9-868c-8181204573c4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-v9rqm\" (UID: \"e2b7e856-0bf2-44b9-868c-8181204573c4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v9rqm"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.225944 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef330858-933c-41ce-b34b-db48cd8e8200-serving-cert\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.225959 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj9xf\" (UniqueName: \"kubernetes.io/projected/ef330858-933c-41ce-b34b-db48cd8e8200-kube-api-access-bj9xf\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.225974 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d8890596-b9fd-4710-9293-687c209c6090-profile-collector-cert\") pod \"olm-operator-6b444d44fb-djrs9\" (UID: \"d8890596-b9fd-4710-9293-687c209c6090\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.225991 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2452c3b9-85cf-4e8e-a20f-3adf5fb602c5-certs\") pod \"machine-config-server-ng9pj\" (UID: \"2452c3b9-85cf-4e8e-a20f-3adf5fb602c5\") " pod="openshift-machine-config-operator/machine-config-server-ng9pj"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226036 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226061 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-557lf\" (UniqueName: \"kubernetes.io/projected/703d9af4-44eb-40f1-a27f-87668bec5700-kube-api-access-557lf\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226108 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf220ddc-cabc-43db-8281-d9304d65c625-proxy-tls\") pod \"machine-config-operator-74547568cd-5djwl\" (UID: \"cf220ddc-cabc-43db-8281-d9304d65c625\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226132 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nn8w\" (UniqueName: \"kubernetes.io/projected/e2b7e856-0bf2-44b9-868c-8181204573c4-kube-api-access-9nn8w\") pod \"multus-admission-controller-857f4d67dd-v9rqm\" (UID: \"e2b7e856-0bf2-44b9-868c-8181204573c4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v9rqm"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226147 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f1e7a69-3cac-4d41-9fa2-72f14d7171be-cert\") pod \"ingress-canary-b248r\" (UID: \"2f1e7a69-3cac-4d41-9fa2-72f14d7171be\") " pod="openshift-ingress-canary/ingress-canary-b248r"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226217 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/48ceac98-86e6-40c5-842f-775af04e420a-metrics-tls\") pod \"dns-default-gp4k7\" (UID: \"48ceac98-86e6-40c5-842f-775af04e420a\") " pod="openshift-dns/dns-default-gp4k7"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226235 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/79ca89d9-d18a-4927-9c58-47754973b8ed-audit-policies\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226248 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00ecd959-d344-450d-91de-06136bac3d80-config\") pod \"machine-approver-56656f9798-p9xxv\" (UID: \"00ecd959-d344-450d-91de-06136bac3d80\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226283 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-metrics-certs\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226299 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-client-ca\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226320 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-registry-tls\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226354 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c097a8f-db6e-4f47-b014-1c9c75a92ad8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8j88\" (UID: \"4c097a8f-db6e-4f47-b014-1c9c75a92ad8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226394 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj6xg\" (UniqueName: \"kubernetes.io/projected/ec6261f9-cc3f-4940-9144-7617d2b81676-kube-api-access-pj6xg\") pod \"catalog-operator-68c6474976-689dm\" (UID: \"ec6261f9-cc3f-4940-9144-7617d2b81676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226445 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-trusted-ca\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226463 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdq4x\" (UniqueName: \"kubernetes.io/projected/4c097a8f-db6e-4f47-b014-1c9c75a92ad8-kube-api-access-vdq4x\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8j88\" (UID: \"4c097a8f-db6e-4f47-b014-1c9c75a92ad8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226515 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6901f70-de25-46df-a04b-7e1dcb979454-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fv75f\" (UID: \"c6901f70-de25-46df-a04b-7e1dcb979454\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226540 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-socket-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp"
Nov 25 06:49:21 crc kubenswrapper[4482]: E1125 06:49:21.226566 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:21.726549219 +0000 UTC m=+136.214780478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226600 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebdc3669-daa5-4220-9042-265024c56738-etcd-client\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226630 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ec6261f9-cc3f-4940-9144-7617d2b81676-profile-collector-cert\") pod \"catalog-operator-68c6474976-689dm\" (UID: \"ec6261f9-cc3f-4940-9144-7617d2b81676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226649 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4pc\" (UniqueName: \"kubernetes.io/projected/754234c1-cad7-452b-b7af-be15353682c9-kube-api-access-vj4pc\") pod \"packageserver-d55dfcdfc-4kxk8\" (UID: \"754234c1-cad7-452b-b7af-be15353682c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226684 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/284d18dc-91eb-4c28-937a-8f7a03e32af0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4n6d5\" (UID: \"284d18dc-91eb-4c28-937a-8f7a03e32af0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226708 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/00ecd959-d344-450d-91de-06136bac3d80-machine-approver-tls\") pod \"machine-approver-56656f9798-p9xxv\" (UID: \"00ecd959-d344-450d-91de-06136bac3d80\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226723 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cf220ddc-cabc-43db-8281-d9304d65c625-images\") pod \"machine-config-operator-74547568cd-5djwl\" (UID: \"cf220ddc-cabc-43db-8281-d9304d65c625\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226762 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/79ca89d9-d18a-4927-9c58-47754973b8ed-etcd-client\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226780 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmknd\" (UniqueName: \"kubernetes.io/projected/1b992fb6-c183-4a39-9438-9ae970028bbf-kube-api-access-cmknd\") pod \"openshift-apiserver-operator-796bbdcf4f-gnvtm\" (UID: \"1b992fb6-c183-4a39-9438-9ae970028bbf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226795 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjvb2\" (UniqueName: \"kubernetes.io/projected/ebdc3669-daa5-4220-9042-265024c56738-kube-api-access-fjvb2\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226811 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q49d2\" (UniqueName: \"kubernetes.io/projected/6735e099-a06c-4b53-8c17-c3f644d7ba91-kube-api-access-q49d2\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226844 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/703d9af4-44eb-40f1-a27f-87668bec5700-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226886 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226900 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226915 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-stats-auth\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226944 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7757\" (UniqueName: \"kubernetes.io/projected/2f1e7a69-3cac-4d41-9fa2-72f14d7171be-kube-api-access-n7757\") pod \"ingress-canary-b248r\" (UID: \"2f1e7a69-3cac-4d41-9fa2-72f14d7171be\") " pod="openshift-ingress-canary/ingress-canary-b248r"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226971 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284d18dc-91eb-4c28-937a-8f7a03e32af0-config\") pod \"kube-controller-manager-operator-78b949d7b-4n6d5\" (UID: \"284d18dc-91eb-4c28-937a-8f7a03e32af0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.226985 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8792fd68-7e83-485d-af18-3d521ab37cbd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4npdz\" (UID: \"8792fd68-7e83-485d-af18-3d521ab37cbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227001 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/79ca89d9-d18a-4927-9c58-47754973b8ed-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227014 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/703d9af4-44eb-40f1-a27f-87668bec5700-serving-cert\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227051 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/299bc1da-cbd5-4574-8811-8fa2cf39529d-signing-key\") pod \"service-ca-9c57cc56f-26zgh\" (UID: \"299bc1da-cbd5-4574-8811-8fa2cf39529d\") " pod="openshift-service-ca/service-ca-9c57cc56f-26zgh"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227066 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8200abb3-4189-4dae-b0d3-9f09c330e278-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2h8cx\" (UID: \"8200abb3-4189-4dae-b0d3-9f09c330e278\") " pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227080 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66b7a3ae-811e-43ea-8d7b-33793e9327b9-serving-cert\") pod \"service-ca-operator-777779d784-hbqb4\" (UID: \"66b7a3ae-811e-43ea-8d7b-33793e9327b9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227093 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8flfl\" (UniqueName: \"kubernetes.io/projected/66b7a3ae-811e-43ea-8d7b-33793e9327b9-kube-api-access-8flfl\") pod \"service-ca-operator-777779d784-hbqb4\" (UID: \"66b7a3ae-811e-43ea-8d7b-33793e9327b9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227125 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79j8f\" (UniqueName: \"kubernetes.io/projected/299bc1da-cbd5-4574-8811-8fa2cf39529d-kube-api-access-79j8f\") pod \"service-ca-9c57cc56f-26zgh\" (UID: \"299bc1da-cbd5-4574-8811-8fa2cf39529d\") " pod="openshift-service-ca/service-ca-9c57cc56f-26zgh"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227142 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghlmp\" (UniqueName: \"kubernetes.io/projected/340a9fad-eda3-46b1-a1d2-64231fb78d62-kube-api-access-ghlmp\") pod \"package-server-manager-789f6589d5-qgcvz\" (UID: \"340a9fad-eda3-46b1-a1d2-64231fb78d62\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227182 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgtzw\" (UniqueName: \"kubernetes.io/projected/c6901f70-de25-46df-a04b-7e1dcb979454-kube-api-access-tgtzw\") pod \"machine-config-controller-84d6567774-fv75f\" (UID: \"c6901f70-de25-46df-a04b-7e1dcb979454\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227198 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-registration-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227213 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48ceac98-86e6-40c5-842f-775af04e420a-config-volume\") pod \"dns-default-gp4k7\" (UID: \"48ceac98-86e6-40c5-842f-775af04e420a\") " pod="openshift-dns/dns-default-gp4k7"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227243 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx8tf\" (UniqueName: \"kubernetes.io/projected/48ceac98-86e6-40c5-842f-775af04e420a-kube-api-access-wx8tf\") pod \"dns-default-gp4k7\" (UID: \"48ceac98-86e6-40c5-842f-775af04e420a\") " pod="openshift-dns/dns-default-gp4k7"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227257 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvmgz\" (UniqueName: \"kubernetes.io/projected/2452c3b9-85cf-4e8e-a20f-3adf5fb602c5-kube-api-access-cvmgz\") pod \"machine-config-server-ng9pj\" (UID: \"2452c3b9-85cf-4e8e-a20f-3adf5fb602c5\") " pod="openshift-machine-config-operator/machine-config-server-ng9pj"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227272 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-default-certificate\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227296 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-bound-sa-token\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227311 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzxs6\" (UniqueName: \"kubernetes.io/projected/00ecd959-d344-450d-91de-06136bac3d80-kube-api-access-bzxs6\") pod \"machine-approver-56656f9798-p9xxv\" (UID: \"00ecd959-d344-450d-91de-06136bac3d80\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227325 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66b7a3ae-811e-43ea-8d7b-33793e9327b9-config\") pod \"service-ca-operator-777779d784-hbqb4\" (UID: \"66b7a3ae-811e-43ea-8d7b-33793e9327b9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227338 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ff92469-ca47-4359-b56a-8df7332739ab-config-volume\") pod \"collect-profiles-29400885-b4rtr\" (UID: \"9ff92469-ca47-4359-b56a-8df7332739ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227352 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ebdc3669-daa5-4220-9042-265024c56738-etcd-ca\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227387 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b992fb6-c183-4a39-9438-9ae970028bbf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gnvtm\" (UID: \"1b992fb6-c183-4a39-9438-9ae970028bbf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227407 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ec6261f9-cc3f-4940-9144-7617d2b81676-srv-cert\") pod \"catalog-operator-68c6474976-689dm\" (UID: \"ec6261f9-cc3f-4940-9144-7617d2b81676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227422 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/754234c1-cad7-452b-b7af-be15353682c9-apiservice-cert\") pod \"packageserver-d55dfcdfc-4kxk8\" (UID: \"754234c1-cad7-452b-b7af-be15353682c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227459 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/79f103eb-d897-4500-9dd0-995bc41bde7c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lmqb9\" (UID: \"79f103eb-d897-4500-9dd0-995bc41bde7c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227476 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284d18dc-91eb-4c28-937a-8f7a03e32af0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4n6d5\" (UID: \"284d18dc-91eb-4c28-937a-8f7a03e32af0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227491 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ff92469-ca47-4359-b56a-8df7332739ab-secret-volume\") pod \"collect-profiles-29400885-b4rtr\" (UID: \"9ff92469-ca47-4359-b56a-8df7332739ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227505 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/299bc1da-cbd5-4574-8811-8fa2cf39529d-signing-cabundle\") pod \"service-ca-9c57cc56f-26zgh\" (UID: \"299bc1da-cbd5-4574-8811-8fa2cf39529d\") " pod="openshift-service-ca/service-ca-9c57cc56f-26zgh"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227542 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00ecd959-d344-450d-91de-06136bac3d80-auth-proxy-config\") pod \"machine-approver-56656f9798-p9xxv\" (UID: \"00ecd959-d344-450d-91de-06136bac3d80\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227557 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8792fd68-7e83-485d-af18-3d521ab37cbd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4npdz\" (UID: \"8792fd68-7e83-485d-af18-3d521ab37cbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227573 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rll2z\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-kube-api-access-rll2z\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227590 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw5cg\" (UniqueName: \"kubernetes.io/projected/8200abb3-4189-4dae-b0d3-9f09c330e278-kube-api-access-hw5cg\") pod \"marketplace-operator-79b997595-2h8cx\" (UID: \"8200abb3-4189-4dae-b0d3-9f09c330e278\") " pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227616 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8200abb3-4189-4dae-b0d3-9f09c330e278-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2h8cx\" (UID: \"8200abb3-4189-4dae-b0d3-9f09c330e278\") " pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227650 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/79ca89d9-d18a-4927-9c58-47754973b8ed-encryption-config\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227666 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79ca89d9-d18a-4927-9c58-47754973b8ed-serving-cert\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227700 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2452c3b9-85cf-4e8e-a20f-3adf5fb602c5-node-bootstrap-token\") pod \"machine-config-server-ng9pj\" (UID: \"2452c3b9-85cf-4e8e-a20f-3adf5fb602c5\") " pod="openshift-machine-config-operator/machine-config-server-ng9pj"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227714 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-service-ca-bundle\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227737 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/340a9fad-eda3-46b1-a1d2-64231fb78d62-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qgcvz\" (UID: \"340a9fad-eda3-46b1-a1d2-64231fb78d62\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227752 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z72rs\" (UniqueName: \"kubernetes.io/projected/9ff92469-ca47-4359-b56a-8df7332739ab-kube-api-access-z72rs\") pod \"collect-profiles-29400885-b4rtr\" (UID: \"9ff92469-ca47-4359-b56a-8df7332739ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227767 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebdc3669-daa5-4220-9042-265024c56738-config\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227808 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq9cc\" (UniqueName: \"kubernetes.io/projected/79f103eb-d897-4500-9dd0-995bc41bde7c-kube-api-access-lq9cc\") pod \"control-plane-machine-set-operator-78cbb6b69f-lmqb9\" (UID: \"79f103eb-d897-4500-9dd0-995bc41bde7c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227825 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf220ddc-cabc-43db-8281-d9304d65c625-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5djwl\" (UID: \"cf220ddc-cabc-43db-8281-d9304d65c625\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227838 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/754234c1-cad7-452b-b7af-be15353682c9-webhook-cert\") pod \"packageserver-d55dfcdfc-4kxk8\" (UID: \"754234c1-cad7-452b-b7af-be15353682c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227851 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdch\" (UniqueName: \"kubernetes.io/projected/d8890596-b9fd-4710-9293-687c209c6090-kube-api-access-wzdch\") pod \"olm-operator-6b444d44fb-djrs9\" (UID: \"d8890596-b9fd-4710-9293-687c209c6090\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227877 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-registry-certificates\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227903 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227918 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebdc3669-daa5-4220-9042-265024c56738-serving-cert\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227947 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7cht\" (UniqueName: \"kubernetes.io/projected/79ca89d9-d18a-4927-9c58-47754973b8ed-kube-api-access-k7cht\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227960 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-plugins-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227974 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gv5j\" (UniqueName: \"kubernetes.io/projected/1e2cfd46-a0a5-4138-9093-b4bd411c6390-kube-api-access-4gv5j\") pod \"migrator-59844c95c7-58h2l\" (UID: \"1e2cfd46-a0a5-4138-9093-b4bd411c6390\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-58h2l"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.227999 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkfxk\" (UniqueName: \"kubernetes.io/projected/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-kube-api-access-mkfxk\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.228014 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfnrf\" (UniqueName: \"kubernetes.io/projected/cf220ddc-cabc-43db-8281-d9304d65c625-kube-api-access-pfnrf\") pod \"machine-config-operator-74547568cd-5djwl\" (UID: \"cf220ddc-cabc-43db-8281-d9304d65c625\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.228028 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-csi-data-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.228046 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/79ca89d9-d18a-4927-9c58-47754973b8ed-audit-dir\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.228061 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-mountpoint-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.228075 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8792fd68-7e83-485d-af18-3d521ab37cbd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4npdz\" (UID: \"8792fd68-7e83-485d-af18-3d521ab37cbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.228101 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79ca89d9-d18a-4927-9c58-47754973b8ed-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5g2wl\"
(UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.228115 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/754234c1-cad7-452b-b7af-be15353682c9-tmpfs\") pod \"packageserver-d55dfcdfc-4kxk8\" (UID: \"754234c1-cad7-452b-b7af-be15353682c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.228142 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703d9af4-44eb-40f1-a27f-87668bec5700-config\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.228155 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-config\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.228245 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c097a8f-db6e-4f47-b014-1c9c75a92ad8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8j88\" (UID: \"4c097a8f-db6e-4f47-b014-1c9c75a92ad8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.228286 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/703d9af4-44eb-40f1-a27f-87668bec5700-service-ca-bundle\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.228446 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b992fb6-c183-4a39-9438-9ae970028bbf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gnvtm\" (UID: \"1b992fb6-c183-4a39-9438-9ae970028bbf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.230581 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/703d9af4-44eb-40f1-a27f-87668bec5700-service-ca-bundle\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.232116 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 
06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.232336 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/79ca89d9-d18a-4927-9c58-47754973b8ed-audit-policies\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.232790 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00ecd959-d344-450d-91de-06136bac3d80-config\") pod \"machine-approver-56656f9798-p9xxv\" (UID: \"00ecd959-d344-450d-91de-06136bac3d80\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.233151 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef330858-933c-41ce-b34b-db48cd8e8200-serving-cert\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.233383 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-client-ca\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.234335 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/79ca89d9-d18a-4927-9c58-47754973b8ed-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.236155 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-registry-tls\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.237010 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/00ecd959-d344-450d-91de-06136bac3d80-machine-approver-tls\") pod \"machine-approver-56656f9798-p9xxv\" (UID: \"00ecd959-d344-450d-91de-06136bac3d80\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.237530 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/703d9af4-44eb-40f1-a27f-87668bec5700-serving-cert\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.238399 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-trusted-ca\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: 
\"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.238603 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/703d9af4-44eb-40f1-a27f-87668bec5700-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.238846 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/79ca89d9-d18a-4927-9c58-47754973b8ed-audit-dir\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.239524 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/79ca89d9-d18a-4927-9c58-47754973b8ed-encryption-config\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:21 crc kubenswrapper[4482]: E1125 06:49:21.240322 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:21.740309534 +0000 UTC m=+136.228540792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.240558 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-config\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.241039 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703d9af4-44eb-40f1-a27f-87668bec5700-config\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.241221 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79ca89d9-d18a-4927-9c58-47754973b8ed-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.241435 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/00ecd959-d344-450d-91de-06136bac3d80-auth-proxy-config\") pod \"machine-approver-56656f9798-p9xxv\" (UID: \"00ecd959-d344-450d-91de-06136bac3d80\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.241470 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b992fb6-c183-4a39-9438-9ae970028bbf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gnvtm\" (UID: \"1b992fb6-c183-4a39-9438-9ae970028bbf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.243527 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/79ca89d9-d18a-4927-9c58-47754973b8ed-etcd-client\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.243836 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.245532 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.246017 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79ca89d9-d18a-4927-9c58-47754973b8ed-serving-cert\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.247032 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-registry-certificates\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.264718 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-557lf\" (UniqueName: \"kubernetes.io/projected/703d9af4-44eb-40f1-a27f-87668bec5700-kube-api-access-557lf\") pod \"authentication-operator-69f744f599-zhw8w\" (UID: \"703d9af4-44eb-40f1-a27f-87668bec5700\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.269437 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.269462 4482 util.go:30] "No sandbox for pod can be found. 
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.269462 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.272377 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z"]
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.287897 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj9xf\" (UniqueName: \"kubernetes.io/projected/ef330858-933c-41ce-b34b-db48cd8e8200-kube-api-access-bj9xf\") pod \"controller-manager-879f6c89f-shnd8\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.299778 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9tqlb"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.311711 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7cht\" (UniqueName: \"kubernetes.io/projected/79ca89d9-d18a-4927-9c58-47754973b8ed-kube-api-access-k7cht\") pod \"apiserver-7bbb656c7d-5g2wl\" (UID: \"79ca89d9-d18a-4927-9c58-47754973b8ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.318906 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt"]
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.330552 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:21 crc kubenswrapper[4482]: E1125 06:49:21.333484 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:21.833448871 +0000 UTC m=+136.321680130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
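Note: the missing hostpath-provisioner driver blocks the unmount path too. Pod 8f668bae-612b-4b75-9490-919e737c6a3b is gone, but UnmountVolume.TearDown needs the driver's node plugin to unpublish the volume, so it fails with the same "not found in the list of registered CSI drivers" cause. Compare the two error keys: MountDevice is tracked per volume with an empty podName (it is node-level and shared), while TearDown is keyed by volume plus pod UID, so the two operations back off independently. Illustrative Go mirroring the two keys printed above; opKey is a hypothetical stand-in for nestedpendingoperations' internal key, not its real type:

package main

import "fmt"

// opKey shows how pending/backed-off operations are distinguished: by
// volume, plus pod UID for per-pod operations. Node-level MountDevice
// leaves podName empty, exactly as in the "{volumeName:... podName: ...}"
// strings in the errors above.
type opKey struct{ volumeName, podName string }

func main() {
	const vol = "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
	backedOff := map[opKey]string{
		{volumeName: vol}: "MountDevice for image-registry-697d97f7c8-fbpdk",
		{volumeName: vol, podName: "8f668bae-612b-4b75-9490-919e737c6a3b"}: "TearDown for the deleted pod",
	}
	for k, op := range backedOff {
		fmt.Printf("{volumeName:%s podName:%s} => %s\n", k.volumeName, k.podName, op)
	}
}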
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.334788 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8792fd68-7e83-485d-af18-3d521ab37cbd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4npdz\" (UID: \"8792fd68-7e83-485d-af18-3d521ab37cbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.334846 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284d18dc-91eb-4c28-937a-8f7a03e32af0-config\") pod \"kube-controller-manager-operator-78b949d7b-4n6d5\" (UID: \"284d18dc-91eb-4c28-937a-8f7a03e32af0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.334892 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/299bc1da-cbd5-4574-8811-8fa2cf39529d-signing-key\") pod \"service-ca-9c57cc56f-26zgh\" (UID: \"299bc1da-cbd5-4574-8811-8fa2cf39529d\") " pod="openshift-service-ca/service-ca-9c57cc56f-26zgh"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.334935 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8200abb3-4189-4dae-b0d3-9f09c330e278-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2h8cx\" (UID: \"8200abb3-4189-4dae-b0d3-9f09c330e278\") " pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.334961 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66b7a3ae-811e-43ea-8d7b-33793e9327b9-serving-cert\") pod \"service-ca-operator-777779d784-hbqb4\" (UID: \"66b7a3ae-811e-43ea-8d7b-33793e9327b9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.334989 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8flfl\" (UniqueName: \"kubernetes.io/projected/66b7a3ae-811e-43ea-8d7b-33793e9327b9-kube-api-access-8flfl\") pod \"service-ca-operator-777779d784-hbqb4\" (UID: \"66b7a3ae-811e-43ea-8d7b-33793e9327b9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335030 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79j8f\" (UniqueName: \"kubernetes.io/projected/299bc1da-cbd5-4574-8811-8fa2cf39529d-kube-api-access-79j8f\") pod \"service-ca-9c57cc56f-26zgh\" (UID: \"299bc1da-cbd5-4574-8811-8fa2cf39529d\") " pod="openshift-service-ca/service-ca-9c57cc56f-26zgh"
Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335055 4482 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-ghlmp\" (UniqueName: \"kubernetes.io/projected/340a9fad-eda3-46b1-a1d2-64231fb78d62-kube-api-access-ghlmp\") pod \"package-server-manager-789f6589d5-qgcvz\" (UID: \"340a9fad-eda3-46b1-a1d2-64231fb78d62\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335084 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgtzw\" (UniqueName: \"kubernetes.io/projected/c6901f70-de25-46df-a04b-7e1dcb979454-kube-api-access-tgtzw\") pod \"machine-config-controller-84d6567774-fv75f\" (UID: \"c6901f70-de25-46df-a04b-7e1dcb979454\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335108 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-registration-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335137 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48ceac98-86e6-40c5-842f-775af04e420a-config-volume\") pod \"dns-default-gp4k7\" (UID: \"48ceac98-86e6-40c5-842f-775af04e420a\") " pod="openshift-dns/dns-default-gp4k7" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335184 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvmgz\" (UniqueName: \"kubernetes.io/projected/2452c3b9-85cf-4e8e-a20f-3adf5fb602c5-kube-api-access-cvmgz\") pod \"machine-config-server-ng9pj\" (UID: \"2452c3b9-85cf-4e8e-a20f-3adf5fb602c5\") " pod="openshift-machine-config-operator/machine-config-server-ng9pj" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335230 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx8tf\" (UniqueName: \"kubernetes.io/projected/48ceac98-86e6-40c5-842f-775af04e420a-kube-api-access-wx8tf\") pod \"dns-default-gp4k7\" (UID: \"48ceac98-86e6-40c5-842f-775af04e420a\") " pod="openshift-dns/dns-default-gp4k7" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335259 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-default-certificate\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335305 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66b7a3ae-811e-43ea-8d7b-33793e9327b9-config\") pod \"service-ca-operator-777779d784-hbqb4\" (UID: \"66b7a3ae-811e-43ea-8d7b-33793e9327b9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335329 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ff92469-ca47-4359-b56a-8df7332739ab-config-volume\") pod \"collect-profiles-29400885-b4rtr\" (UID: \"9ff92469-ca47-4359-b56a-8df7332739ab\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335362 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ebdc3669-daa5-4220-9042-265024c56738-etcd-ca\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335389 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ec6261f9-cc3f-4940-9144-7617d2b81676-srv-cert\") pod \"catalog-operator-68c6474976-689dm\" (UID: \"ec6261f9-cc3f-4940-9144-7617d2b81676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335418 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/754234c1-cad7-452b-b7af-be15353682c9-apiservice-cert\") pod \"packageserver-d55dfcdfc-4kxk8\" (UID: \"754234c1-cad7-452b-b7af-be15353682c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335466 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/79f103eb-d897-4500-9dd0-995bc41bde7c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lmqb9\" (UID: \"79f103eb-d897-4500-9dd0-995bc41bde7c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335500 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284d18dc-91eb-4c28-937a-8f7a03e32af0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4n6d5\" (UID: \"284d18dc-91eb-4c28-937a-8f7a03e32af0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335531 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ff92469-ca47-4359-b56a-8df7332739ab-secret-volume\") pod \"collect-profiles-29400885-b4rtr\" (UID: \"9ff92469-ca47-4359-b56a-8df7332739ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335553 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/299bc1da-cbd5-4574-8811-8fa2cf39529d-signing-cabundle\") pod \"service-ca-9c57cc56f-26zgh\" (UID: \"299bc1da-cbd5-4574-8811-8fa2cf39529d\") " pod="openshift-service-ca/service-ca-9c57cc56f-26zgh" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335586 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8792fd68-7e83-485d-af18-3d521ab37cbd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4npdz\" (UID: \"8792fd68-7e83-485d-af18-3d521ab37cbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 
06:49:21.335627 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw5cg\" (UniqueName: \"kubernetes.io/projected/8200abb3-4189-4dae-b0d3-9f09c330e278-kube-api-access-hw5cg\") pod \"marketplace-operator-79b997595-2h8cx\" (UID: \"8200abb3-4189-4dae-b0d3-9f09c330e278\") " pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335658 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8200abb3-4189-4dae-b0d3-9f09c330e278-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2h8cx\" (UID: \"8200abb3-4189-4dae-b0d3-9f09c330e278\") " pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335687 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-registration-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335702 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2452c3b9-85cf-4e8e-a20f-3adf5fb602c5-node-bootstrap-token\") pod \"machine-config-server-ng9pj\" (UID: \"2452c3b9-85cf-4e8e-a20f-3adf5fb602c5\") " pod="openshift-machine-config-operator/machine-config-server-ng9pj" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335779 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-service-ca-bundle\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335811 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/340a9fad-eda3-46b1-a1d2-64231fb78d62-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qgcvz\" (UID: \"340a9fad-eda3-46b1-a1d2-64231fb78d62\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335845 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z72rs\" (UniqueName: \"kubernetes.io/projected/9ff92469-ca47-4359-b56a-8df7332739ab-kube-api-access-z72rs\") pod \"collect-profiles-29400885-b4rtr\" (UID: \"9ff92469-ca47-4359-b56a-8df7332739ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335872 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebdc3669-daa5-4220-9042-265024c56738-config\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335894 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq9cc\" (UniqueName: 
\"kubernetes.io/projected/79f103eb-d897-4500-9dd0-995bc41bde7c-kube-api-access-lq9cc\") pod \"control-plane-machine-set-operator-78cbb6b69f-lmqb9\" (UID: \"79f103eb-d897-4500-9dd0-995bc41bde7c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335919 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf220ddc-cabc-43db-8281-d9304d65c625-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5djwl\" (UID: \"cf220ddc-cabc-43db-8281-d9304d65c625\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335957 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/754234c1-cad7-452b-b7af-be15353682c9-webhook-cert\") pod \"packageserver-d55dfcdfc-4kxk8\" (UID: \"754234c1-cad7-452b-b7af-be15353682c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.335982 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzdch\" (UniqueName: \"kubernetes.io/projected/d8890596-b9fd-4710-9293-687c209c6090-kube-api-access-wzdch\") pod \"olm-operator-6b444d44fb-djrs9\" (UID: \"d8890596-b9fd-4710-9293-687c209c6090\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336014 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebdc3669-daa5-4220-9042-265024c56738-serving-cert\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336054 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gv5j\" (UniqueName: \"kubernetes.io/projected/1e2cfd46-a0a5-4138-9093-b4bd411c6390-kube-api-access-4gv5j\") pod \"migrator-59844c95c7-58h2l\" (UID: \"1e2cfd46-a0a5-4138-9093-b4bd411c6390\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-58h2l" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336083 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-plugins-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336102 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkfxk\" (UniqueName: \"kubernetes.io/projected/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-kube-api-access-mkfxk\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336099 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66b7a3ae-811e-43ea-8d7b-33793e9327b9-config\") pod \"service-ca-operator-777779d784-hbqb4\" (UID: \"66b7a3ae-811e-43ea-8d7b-33793e9327b9\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336123 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfnrf\" (UniqueName: \"kubernetes.io/projected/cf220ddc-cabc-43db-8281-d9304d65c625-kube-api-access-pfnrf\") pod \"machine-config-operator-74547568cd-5djwl\" (UID: \"cf220ddc-cabc-43db-8281-d9304d65c625\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336144 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-csi-data-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336185 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-mountpoint-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336209 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8792fd68-7e83-485d-af18-3d521ab37cbd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4npdz\" (UID: \"8792fd68-7e83-485d-af18-3d521ab37cbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336234 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/754234c1-cad7-452b-b7af-be15353682c9-tmpfs\") pod \"packageserver-d55dfcdfc-4kxk8\" (UID: \"754234c1-cad7-452b-b7af-be15353682c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336264 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c097a8f-db6e-4f47-b014-1c9c75a92ad8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8j88\" (UID: \"4c097a8f-db6e-4f47-b014-1c9c75a92ad8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336294 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d8890596-b9fd-4710-9293-687c209c6090-srv-cert\") pod \"olm-operator-6b444d44fb-djrs9\" (UID: \"d8890596-b9fd-4710-9293-687c209c6090\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336318 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c6901f70-de25-46df-a04b-7e1dcb979454-proxy-tls\") pod \"machine-config-controller-84d6567774-fv75f\" (UID: \"c6901f70-de25-46df-a04b-7e1dcb979454\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336336 4482 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ebdc3669-daa5-4220-9042-265024c56738-etcd-service-ca\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336358 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2b7e856-0bf2-44b9-868c-8181204573c4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-v9rqm\" (UID: \"e2b7e856-0bf2-44b9-868c-8181204573c4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v9rqm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336381 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d8890596-b9fd-4710-9293-687c209c6090-profile-collector-cert\") pod \"olm-operator-6b444d44fb-djrs9\" (UID: \"d8890596-b9fd-4710-9293-687c209c6090\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336401 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2452c3b9-85cf-4e8e-a20f-3adf5fb602c5-certs\") pod \"machine-config-server-ng9pj\" (UID: \"2452c3b9-85cf-4e8e-a20f-3adf5fb602c5\") " pod="openshift-machine-config-operator/machine-config-server-ng9pj" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336423 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf220ddc-cabc-43db-8281-d9304d65c625-proxy-tls\") pod \"machine-config-operator-74547568cd-5djwl\" (UID: \"cf220ddc-cabc-43db-8281-d9304d65c625\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336447 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nn8w\" (UniqueName: \"kubernetes.io/projected/e2b7e856-0bf2-44b9-868c-8181204573c4-kube-api-access-9nn8w\") pod \"multus-admission-controller-857f4d67dd-v9rqm\" (UID: \"e2b7e856-0bf2-44b9-868c-8181204573c4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v9rqm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336468 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f1e7a69-3cac-4d41-9fa2-72f14d7171be-cert\") pod \"ingress-canary-b248r\" (UID: \"2f1e7a69-3cac-4d41-9fa2-72f14d7171be\") " pod="openshift-ingress-canary/ingress-canary-b248r" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336491 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/48ceac98-86e6-40c5-842f-775af04e420a-metrics-tls\") pod \"dns-default-gp4k7\" (UID: \"48ceac98-86e6-40c5-842f-775af04e420a\") " pod="openshift-dns/dns-default-gp4k7" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336524 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-metrics-certs\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8" Nov 25 
06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336551 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c097a8f-db6e-4f47-b014-1c9c75a92ad8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8j88\" (UID: \"4c097a8f-db6e-4f47-b014-1c9c75a92ad8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336574 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj6xg\" (UniqueName: \"kubernetes.io/projected/ec6261f9-cc3f-4940-9144-7617d2b81676-kube-api-access-pj6xg\") pod \"catalog-operator-68c6474976-689dm\" (UID: \"ec6261f9-cc3f-4940-9144-7617d2b81676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336593 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdq4x\" (UniqueName: \"kubernetes.io/projected/4c097a8f-db6e-4f47-b014-1c9c75a92ad8-kube-api-access-vdq4x\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8j88\" (UID: \"4c097a8f-db6e-4f47-b014-1c9c75a92ad8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336618 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6901f70-de25-46df-a04b-7e1dcb979454-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fv75f\" (UID: \"c6901f70-de25-46df-a04b-7e1dcb979454\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336641 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-socket-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336660 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebdc3669-daa5-4220-9042-265024c56738-etcd-client\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336684 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ec6261f9-cc3f-4940-9144-7617d2b81676-profile-collector-cert\") pod \"catalog-operator-68c6474976-689dm\" (UID: \"ec6261f9-cc3f-4940-9144-7617d2b81676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336701 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj4pc\" (UniqueName: \"kubernetes.io/projected/754234c1-cad7-452b-b7af-be15353682c9-kube-api-access-vj4pc\") pod \"packageserver-d55dfcdfc-4kxk8\" (UID: \"754234c1-cad7-452b-b7af-be15353682c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 
06:49:21.336722 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/284d18dc-91eb-4c28-937a-8f7a03e32af0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4n6d5\" (UID: \"284d18dc-91eb-4c28-937a-8f7a03e32af0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336745 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cf220ddc-cabc-43db-8281-d9304d65c625-images\") pod \"machine-config-operator-74547568cd-5djwl\" (UID: \"cf220ddc-cabc-43db-8281-d9304d65c625\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336772 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjvb2\" (UniqueName: \"kubernetes.io/projected/ebdc3669-daa5-4220-9042-265024c56738-kube-api-access-fjvb2\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336782 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48ceac98-86e6-40c5-842f-775af04e420a-config-volume\") pod \"dns-default-gp4k7\" (UID: \"48ceac98-86e6-40c5-842f-775af04e420a\") " pod="openshift-dns/dns-default-gp4k7" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336800 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q49d2\" (UniqueName: \"kubernetes.io/projected/6735e099-a06c-4b53-8c17-c3f644d7ba91-kube-api-access-q49d2\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336833 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336857 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-stats-auth\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.336889 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7757\" (UniqueName: \"kubernetes.io/projected/2f1e7a69-3cac-4d41-9fa2-72f14d7171be-kube-api-access-n7757\") pod \"ingress-canary-b248r\" (UID: \"2f1e7a69-3cac-4d41-9fa2-72f14d7171be\") " pod="openshift-ingress-canary/ingress-canary-b248r" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.338880 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8792fd68-7e83-485d-af18-3d521ab37cbd-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-4npdz\" (UID: \"8792fd68-7e83-485d-af18-3d521ab37cbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.339407 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284d18dc-91eb-4c28-937a-8f7a03e32af0-config\") pod \"kube-controller-manager-operator-78b949d7b-4n6d5\" (UID: \"284d18dc-91eb-4c28-937a-8f7a03e32af0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.339945 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzxs6\" (UniqueName: \"kubernetes.io/projected/00ecd959-d344-450d-91de-06136bac3d80-kube-api-access-bzxs6\") pod \"machine-approver-56656f9798-p9xxv\" (UID: \"00ecd959-d344-450d-91de-06136bac3d80\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.340395 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-default-certificate\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.345220 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ebdc3669-daa5-4220-9042-265024c56738-etcd-ca\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.346512 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/299bc1da-cbd5-4574-8811-8fa2cf39529d-signing-cabundle\") pod \"service-ca-9c57cc56f-26zgh\" (UID: \"299bc1da-cbd5-4574-8811-8fa2cf39529d\") " pod="openshift-service-ca/service-ca-9c57cc56f-26zgh" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.347199 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ff92469-ca47-4359-b56a-8df7332739ab-config-volume\") pod \"collect-profiles-29400885-b4rtr\" (UID: \"9ff92469-ca47-4359-b56a-8df7332739ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.349029 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/754234c1-cad7-452b-b7af-be15353682c9-apiservice-cert\") pod \"packageserver-d55dfcdfc-4kxk8\" (UID: \"754234c1-cad7-452b-b7af-be15353682c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.350646 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-socket-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.354734 4482 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ebdc3669-daa5-4220-9042-265024c56738-etcd-service-ca\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:21 crc kubenswrapper[4482]: E1125 06:49:21.359211 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:21.859141081 +0000 UTC m=+136.347372341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.360120 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6901f70-de25-46df-a04b-7e1dcb979454-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fv75f\" (UID: \"c6901f70-de25-46df-a04b-7e1dcb979454\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.361983 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cf220ddc-cabc-43db-8281-d9304d65c625-images\") pod \"machine-config-operator-74547568cd-5djwl\" (UID: \"cf220ddc-cabc-43db-8281-d9304d65c625\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.367857 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d8890596-b9fd-4710-9293-687c209c6090-profile-collector-cert\") pod \"olm-operator-6b444d44fb-djrs9\" (UID: \"d8890596-b9fd-4710-9293-687c209c6090\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.368547 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2452c3b9-85cf-4e8e-a20f-3adf5fb602c5-certs\") pod \"machine-config-server-ng9pj\" (UID: \"2452c3b9-85cf-4e8e-a20f-3adf5fb602c5\") " pod="openshift-machine-config-operator/machine-config-server-ng9pj" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.375596 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8200abb3-4189-4dae-b0d3-9f09c330e278-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2h8cx\" (UID: \"8200abb3-4189-4dae-b0d3-9f09c330e278\") " pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.376587 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/299bc1da-cbd5-4574-8811-8fa2cf39529d-signing-key\") pod \"service-ca-9c57cc56f-26zgh\" (UID: \"299bc1da-cbd5-4574-8811-8fa2cf39529d\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-26zgh" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.378493 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/79f103eb-d897-4500-9dd0-995bc41bde7c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lmqb9\" (UID: \"79f103eb-d897-4500-9dd0-995bc41bde7c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.378699 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ec6261f9-cc3f-4940-9144-7617d2b81676-srv-cert\") pod \"catalog-operator-68c6474976-689dm\" (UID: \"ec6261f9-cc3f-4940-9144-7617d2b81676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.380838 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8200abb3-4189-4dae-b0d3-9f09c330e278-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2h8cx\" (UID: \"8200abb3-4189-4dae-b0d3-9f09c330e278\") " pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.382394 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ec6261f9-cc3f-4940-9144-7617d2b81676-profile-collector-cert\") pod \"catalog-operator-68c6474976-689dm\" (UID: \"ec6261f9-cc3f-4940-9144-7617d2b81676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.382703 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-bound-sa-token\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.382922 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2452c3b9-85cf-4e8e-a20f-3adf5fb602c5-node-bootstrap-token\") pod \"machine-config-server-ng9pj\" (UID: \"2452c3b9-85cf-4e8e-a20f-3adf5fb602c5\") " pod="openshift-machine-config-operator/machine-config-server-ng9pj" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.382994 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf220ddc-cabc-43db-8281-d9304d65c625-proxy-tls\") pod \"machine-config-operator-74547568cd-5djwl\" (UID: \"cf220ddc-cabc-43db-8281-d9304d65c625\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.383802 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ff92469-ca47-4359-b56a-8df7332739ab-secret-volume\") pod \"collect-profiles-29400885-b4rtr\" (UID: \"9ff92469-ca47-4359-b56a-8df7332739ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.386463 4482 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.387994 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf220ddc-cabc-43db-8281-d9304d65c625-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5djwl\" (UID: \"cf220ddc-cabc-43db-8281-d9304d65c625\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.394332 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n"] Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.394606 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-service-ca-bundle\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.395057 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2b7e856-0bf2-44b9-868c-8181204573c4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-v9rqm\" (UID: \"e2b7e856-0bf2-44b9-868c-8181204573c4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v9rqm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.395293 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebdc3669-daa5-4220-9042-265024c56738-etcd-client\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.395366 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-stats-auth\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.395438 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmknd\" (UniqueName: \"kubernetes.io/projected/1b992fb6-c183-4a39-9438-9ae970028bbf-kube-api-access-cmknd\") pod \"openshift-apiserver-operator-796bbdcf4f-gnvtm\" (UID: \"1b992fb6-c183-4a39-9438-9ae970028bbf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.395703 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66b7a3ae-811e-43ea-8d7b-33793e9327b9-serving-cert\") pod \"service-ca-operator-777779d784-hbqb4\" (UID: \"66b7a3ae-811e-43ea-8d7b-33793e9327b9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.396080 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebdc3669-daa5-4220-9042-265024c56738-config\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.396494 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284d18dc-91eb-4c28-937a-8f7a03e32af0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4n6d5\" (UID: \"284d18dc-91eb-4c28-937a-8f7a03e32af0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.397105 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-metrics-certs\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.400520 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c6901f70-de25-46df-a04b-7e1dcb979454-proxy-tls\") pod \"machine-config-controller-84d6567774-fv75f\" (UID: \"c6901f70-de25-46df-a04b-7e1dcb979454\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.400533 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d8890596-b9fd-4710-9293-687c209c6090-srv-cert\") pod \"olm-operator-6b444d44fb-djrs9\" (UID: \"d8890596-b9fd-4710-9293-687c209c6090\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.401196 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-mountpoint-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.401286 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c097a8f-db6e-4f47-b014-1c9c75a92ad8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8j88\" (UID: \"4c097a8f-db6e-4f47-b014-1c9c75a92ad8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.401331 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-plugins-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.401667 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/754234c1-cad7-452b-b7af-be15353682c9-tmpfs\") pod \"packageserver-d55dfcdfc-4kxk8\" (UID: \"754234c1-cad7-452b-b7af-be15353682c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.401807 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/6735e099-a06c-4b53-8c17-c3f644d7ba91-csi-data-dir\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.402006 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/48ceac98-86e6-40c5-842f-775af04e420a-metrics-tls\") pod \"dns-default-gp4k7\" (UID: \"48ceac98-86e6-40c5-842f-775af04e420a\") " pod="openshift-dns/dns-default-gp4k7" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.402759 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f1e7a69-3cac-4d41-9fa2-72f14d7171be-cert\") pod \"ingress-canary-b248r\" (UID: \"2f1e7a69-3cac-4d41-9fa2-72f14d7171be\") " pod="openshift-ingress-canary/ingress-canary-b248r" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.405669 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" event={"ID":"43f33231-2b25-4a54-87da-e93c8cf3ee18","Type":"ContainerStarted","Data":"59f1194725db55e662ca018a375ef3096924abafb1916b51afcac9f4abab8e78"} Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.412060 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rll2z\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-kube-api-access-rll2z\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.412060 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8792fd68-7e83-485d-af18-3d521ab37cbd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4npdz\" (UID: \"8792fd68-7e83-485d-af18-3d521ab37cbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.412244 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c097a8f-db6e-4f47-b014-1c9c75a92ad8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8j88\" (UID: \"4c097a8f-db6e-4f47-b014-1c9c75a92ad8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.412941 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebdc3669-daa5-4220-9042-265024c56738-serving-cert\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.413937 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/754234c1-cad7-452b-b7af-be15353682c9-webhook-cert\") pod \"packageserver-d55dfcdfc-4kxk8\" (UID: \"754234c1-cad7-452b-b7af-be15353682c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.415101 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/340a9fad-eda3-46b1-a1d2-64231fb78d62-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qgcvz\" (UID: \"340a9fad-eda3-46b1-a1d2-64231fb78d62\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.416138 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-78b9v" event={"ID":"13c2044e-5435-4487-be5b-fafa43b6db3a","Type":"ContainerStarted","Data":"ea726acf6d7be8650a0546bf5e5b2e3f8d5db1b508a89fbdc73707a06940edaf"} Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.416181 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-78b9v" event={"ID":"13c2044e-5435-4487-be5b-fafa43b6db3a","Type":"ContainerStarted","Data":"dd692766cdc2ceea08fd5215b3d49b19b6ed7d3cabc79378850ae107f853e0a2"} Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.416637 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-78b9v" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.428873 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvmgz\" (UniqueName: \"kubernetes.io/projected/2452c3b9-85cf-4e8e-a20f-3adf5fb602c5-kube-api-access-cvmgz\") pod \"machine-config-server-ng9pj\" (UID: \"2452c3b9-85cf-4e8e-a20f-3adf5fb602c5\") " pod="openshift-machine-config-operator/machine-config-server-ng9pj" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.430679 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" event={"ID":"587f32ef-b1da-4e40-a1bc-33ba39c207e8","Type":"ContainerStarted","Data":"a058f6c0e6389a23d1ceb171c0925794b88369ff80095ba01042413a4a01a7f6"} Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.430705 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.430715 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" event={"ID":"587f32ef-b1da-4e40-a1bc-33ba39c207e8","Type":"ContainerStarted","Data":"8d5c3f2b70beeae3d0a6c71c01ba202855c7c51a913cf8c882b07082b3fed232"} Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.432061 4482 patch_prober.go:28] interesting pod/downloads-7954f5f757-78b9v container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.432094 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78b9v" podUID="13c2044e-5435-4487-be5b-fafa43b6db3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.437024 4482 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-qbn2w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Nov 25 06:49:21 crc 
kubenswrapper[4482]: I1125 06:49:21.437055 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" podUID="587f32ef-b1da-4e40-a1bc-33ba39c207e8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.437462 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:21 crc kubenswrapper[4482]: E1125 06:49:21.437892 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:21.93787132 +0000 UTC m=+136.426102579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.439147 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.446348 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9ggws"] Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.449340 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.457546 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx8tf\" (UniqueName: \"kubernetes.io/projected/48ceac98-86e6-40c5-842f-775af04e420a-kube-api-access-wx8tf\") pod \"dns-default-gp4k7\" (UID: \"48ceac98-86e6-40c5-842f-775af04e420a\") " pod="openshift-dns/dns-default-gp4k7" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.460107 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.463423 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7757\" (UniqueName: \"kubernetes.io/projected/2f1e7a69-3cac-4d41-9fa2-72f14d7171be-kube-api-access-n7757\") pod \"ingress-canary-b248r\" (UID: \"2f1e7a69-3cac-4d41-9fa2-72f14d7171be\") " pod="openshift-ingress-canary/ingress-canary-b248r" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.465767 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.477248 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.498628 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw5cg\" (UniqueName: \"kubernetes.io/projected/8200abb3-4189-4dae-b0d3-9f09c330e278-kube-api-access-hw5cg\") pod \"marketplace-operator-79b997595-2h8cx\" (UID: \"8200abb3-4189-4dae-b0d3-9f09c330e278\") " pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.512885 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.515591 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n56kp"] Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.519981 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gqc49"] Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.532483 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdq4x\" (UniqueName: \"kubernetes.io/projected/4c097a8f-db6e-4f47-b014-1c9c75a92ad8-kube-api-access-vdq4x\") pod \"kube-storage-version-migrator-operator-b67b599dd-b8j88\" (UID: \"4c097a8f-db6e-4f47-b014-1c9c75a92ad8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.533123 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj6xg\" (UniqueName: \"kubernetes.io/projected/ec6261f9-cc3f-4940-9144-7617d2b81676-kube-api-access-pj6xg\") pod \"catalog-operator-68c6474976-689dm\" (UID: \"ec6261f9-cc3f-4940-9144-7617d2b81676\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.534550 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gp4k7" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.539241 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.539657 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ng9pj" Nov 25 06:49:21 crc kubenswrapper[4482]: E1125 06:49:21.539939 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:22.03990847 +0000 UTC m=+136.528139728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.545013 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-b248r" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.545998 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj4pc\" (UniqueName: \"kubernetes.io/projected/754234c1-cad7-452b-b7af-be15353682c9-kube-api-access-vj4pc\") pod \"packageserver-d55dfcdfc-4kxk8\" (UID: \"754234c1-cad7-452b-b7af-be15353682c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.588003 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh"] Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.596646 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/284d18dc-91eb-4c28-937a-8f7a03e32af0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4n6d5\" (UID: \"284d18dc-91eb-4c28-937a-8f7a03e32af0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.605023 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjvb2\" (UniqueName: \"kubernetes.io/projected/ebdc3669-daa5-4220-9042-265024c56738-kube-api-access-fjvb2\") pod \"etcd-operator-b45778765-5w6bs\" (UID: \"ebdc3669-daa5-4220-9042-265024c56738\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.616000 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q49d2\" (UniqueName: \"kubernetes.io/projected/6735e099-a06c-4b53-8c17-c3f644d7ba91-kube-api-access-q49d2\") pod \"csi-hostpathplugin-gvbtp\" (UID: \"6735e099-a06c-4b53-8c17-c3f644d7ba91\") " pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.638396 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nn8w\" (UniqueName: \"kubernetes.io/projected/e2b7e856-0bf2-44b9-868c-8181204573c4-kube-api-access-9nn8w\") pod \"multus-admission-controller-857f4d67dd-v9rqm\" (UID: \"e2b7e856-0bf2-44b9-868c-8181204573c4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v9rqm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.640310 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:21 crc kubenswrapper[4482]: E1125 06:49:21.640782 4482 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:22.140749093 +0000 UTC m=+136.628980352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.658300 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghlmp\" (UniqueName: \"kubernetes.io/projected/340a9fad-eda3-46b1-a1d2-64231fb78d62-kube-api-access-ghlmp\") pod \"package-server-manager-789f6589d5-qgcvz\" (UID: \"340a9fad-eda3-46b1-a1d2-64231fb78d62\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.668659 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f8zk7"] Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.671645 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z72rs\" (UniqueName: \"kubernetes.io/projected/9ff92469-ca47-4359-b56a-8df7332739ab-kube-api-access-z72rs\") pod \"collect-profiles-29400885-b4rtr\" (UID: \"9ff92469-ca47-4359-b56a-8df7332739ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.684081 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-v9rqm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.696064 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgtzw\" (UniqueName: \"kubernetes.io/projected/c6901f70-de25-46df-a04b-7e1dcb979454-kube-api-access-tgtzw\") pod \"machine-config-controller-84d6567774-fv75f\" (UID: \"c6901f70-de25-46df-a04b-7e1dcb979454\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.703560 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9tqlb"] Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.710667 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq9cc\" (UniqueName: \"kubernetes.io/projected/79f103eb-d897-4500-9dd0-995bc41bde7c-kube-api-access-lq9cc\") pod \"control-plane-machine-set-operator-78cbb6b69f-lmqb9\" (UID: \"79f103eb-d897-4500-9dd0-995bc41bde7c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.722885 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzdch\" (UniqueName: \"kubernetes.io/projected/d8890596-b9fd-4710-9293-687c209c6090-kube-api-access-wzdch\") pod \"olm-operator-6b444d44fb-djrs9\" (UID: \"d8890596-b9fd-4710-9293-687c209c6090\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9" Nov 25 06:49:21 crc kubenswrapper[4482]: W1125 06:49:21.723940 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2452c3b9_85cf_4e8e_a20f_3adf5fb602c5.slice/crio-3d7314d89f583197c720caa8c8be714df3156f751b5f4f8046c2dedef6d3f221 WatchSource:0}: Error finding container 3d7314d89f583197c720caa8c8be714df3156f751b5f4f8046c2dedef6d3f221: Status 404 returned error can't find the container with id 3d7314d89f583197c720caa8c8be714df3156f751b5f4f8046c2dedef6d3f221 Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.727198 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.734002 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr" Nov 25 06:49:21 crc kubenswrapper[4482]: W1125 06:49:21.736717 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod861a93e1_ffca_40f2_ada4_2f736f05ba1c.slice/crio-ff3132ad12f6a55bd3384442126c0fe2230f21228c651bfd73d11623c5447f95 WatchSource:0}: Error finding container ff3132ad12f6a55bd3384442126c0fe2230f21228c651bfd73d11623c5447f95: Status 404 returned error can't find the container with id ff3132ad12f6a55bd3384442126c0fe2230f21228c651bfd73d11623c5447f95 Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.741689 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:21 crc kubenswrapper[4482]: E1125 06:49:21.741995 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:22.241984021 +0000 UTC m=+136.730215280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.745782 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.750681 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.751076 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gv5j\" (UniqueName: \"kubernetes.io/projected/1e2cfd46-a0a5-4138-9093-b4bd411c6390-kube-api-access-4gv5j\") pod \"migrator-59844c95c7-58h2l\" (UID: \"1e2cfd46-a0a5-4138-9093-b4bd411c6390\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-58h2l" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.755592 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.761492 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.773540 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-58h2l" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.802664 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.803408 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8792fd68-7e83-485d-af18-3d521ab37cbd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4npdz\" (UID: \"8792fd68-7e83-485d-af18-3d521ab37cbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.804515 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-shnd8"] Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.805911 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.808470 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkfxk\" (UniqueName: \"kubernetes.io/projected/d82a8d2c-46a2-4c77-b524-57c894fbc0a0-kube-api-access-mkfxk\") pod \"router-default-5444994796-6czb8\" (UID: \"d82a8d2c-46a2-4c77-b524-57c894fbc0a0\") " pod="openshift-ingress/router-default-5444994796-6czb8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.829095 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfnrf\" (UniqueName: \"kubernetes.io/projected/cf220ddc-cabc-43db-8281-d9304d65c625-kube-api-access-pfnrf\") pod \"machine-config-operator-74547568cd-5djwl\" (UID: \"cf220ddc-cabc-43db-8281-d9304d65c625\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.829231 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.829405 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.844476 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:21 crc kubenswrapper[4482]: E1125 06:49:21.845306 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:22.345275888 +0000 UTC m=+136.833507146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.860235 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79j8f\" (UniqueName: \"kubernetes.io/projected/299bc1da-cbd5-4574-8811-8fa2cf39529d-kube-api-access-79j8f\") pod \"service-ca-9c57cc56f-26zgh\" (UID: \"299bc1da-cbd5-4574-8811-8fa2cf39529d\") " pod="openshift-service-ca/service-ca-9c57cc56f-26zgh" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.878014 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8flfl\" (UniqueName: \"kubernetes.io/projected/66b7a3ae-811e-43ea-8d7b-33793e9327b9-kube-api-access-8flfl\") pod \"service-ca-operator-777779d784-hbqb4\" (UID: \"66b7a3ae-811e-43ea-8d7b-33793e9327b9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.883312 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl"] Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.884504 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zhw8w"] Nov 25 06:49:21 crc kubenswrapper[4482]: W1125 06:49:21.940769 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef330858_933c_41ce_b34b_db48cd8e8200.slice/crio-68fc0f6532f89f714bb991e6ed1776353378a61abccb85059b50ccfaf7b1e20b WatchSource:0}: Error finding container 68fc0f6532f89f714bb991e6ed1776353378a61abccb85059b50ccfaf7b1e20b: Status 404 returned error can't find the container with id 68fc0f6532f89f714bb991e6ed1776353378a61abccb85059b50ccfaf7b1e20b Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.946254 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:21 crc kubenswrapper[4482]: E1125 06:49:21.946566 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:22.446554086 +0000 UTC m=+136.934785346 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.980001 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" Nov 25 06:49:21 crc kubenswrapper[4482]: I1125 06:49:21.988655 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f" Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.017298 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gp4k7"] Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.021922 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-26zgh" Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.024185 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm"] Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.040352 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4" Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.049085 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:22 crc kubenswrapper[4482]: E1125 06:49:22.049572 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:22.549368332 +0000 UTC m=+137.037599592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.051416 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:22 crc kubenswrapper[4482]: E1125 06:49:22.051949 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:22.551926347 +0000 UTC m=+137.040167183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.067806 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz" Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.079898 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl"] Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.080150 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-6czb8" Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.152348 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:22 crc kubenswrapper[4482]: E1125 06:49:22.153033 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:22.653014427 +0000 UTC m=+137.141245686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:22 crc kubenswrapper[4482]: W1125 06:49:22.198833 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79ca89d9_d18a_4927_9c58_47754973b8ed.slice/crio-66f3b490001974256141113562d8bac9594e19a3825b92f031022a01a57e8bce WatchSource:0}: Error finding container 66f3b490001974256141113562d8bac9594e19a3825b92f031022a01a57e8bce: Status 404 returned error can't find the container with id 66f3b490001974256141113562d8bac9594e19a3825b92f031022a01a57e8bce Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.208199 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9"] Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.261338 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:22 crc kubenswrapper[4482]: E1125 06:49:22.263538 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:22.761707953 +0000 UTC m=+137.249939211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.315641 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-b248r"] Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.364865 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:22 crc kubenswrapper[4482]: E1125 06:49:22.365272 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:22.865250361 +0000 UTC m=+137.353481621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.367708 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:22 crc kubenswrapper[4482]: E1125 06:49:22.373041 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:22.873020808 +0000 UTC m=+137.361252067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.416963 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2h8cx"] Nov 25 06:49:22 crc kubenswrapper[4482]: W1125 06:49:22.442210 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8890596_b9fd_4710_9293_687c209c6090.slice/crio-7371604ffbc3a26b7fcb80d0febaf481ee739e5c3cba4fc9f8183f8ccded652a WatchSource:0}: Error finding container 7371604ffbc3a26b7fcb80d0febaf481ee739e5c3cba4fc9f8183f8ccded652a: Status 404 returned error can't find the container with id 7371604ffbc3a26b7fcb80d0febaf481ee739e5c3cba4fc9f8183f8ccded652a Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.448997 4482 generic.go:334] "Generic (PLEG): container finished" podID="43f33231-2b25-4a54-87da-e93c8cf3ee18" containerID="b247b086c6e57ec69eff198dc03ebef9fbbadf914de33bab2c39544012e326b7" exitCode=0 Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.449061 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" event={"ID":"43f33231-2b25-4a54-87da-e93c8cf3ee18","Type":"ContainerDied","Data":"b247b086c6e57ec69eff198dc03ebef9fbbadf914de33bab2c39544012e326b7"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.453446 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" event={"ID":"61e22994-72d9-477f-8f3f-89a77ade8196","Type":"ContainerStarted","Data":"55d8fa39095ca86072f32975b63d100616a08fd19b3f6199a2da3f58f0b2f91d"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.463624 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n56kp" 
event={"ID":"15832b7c-8637-457d-bf40-c9d8ae03445d","Type":"ContainerStarted","Data":"25a909a8bd9d8981f40770824f3d6c41e22aa37beb97bcea042015167392ea86"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.463657 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n56kp" event={"ID":"15832b7c-8637-457d-bf40-c9d8ae03445d","Type":"ContainerStarted","Data":"094b2784879e9ba130107d9850bc9ee0ddb9264d4ab95dd3365899610c64e1d3"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.465777 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm" event={"ID":"1b992fb6-c183-4a39-9438-9ae970028bbf","Type":"ContainerStarted","Data":"a5f0747411c3c783249000c4c7506ff33361a294b8aadafaddd7326b31c8a2c5"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.465945 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9"] Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.469420 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:22 crc kubenswrapper[4482]: E1125 06:49:22.469528 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:22.96948741 +0000 UTC m=+137.457718670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.469679 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:22 crc kubenswrapper[4482]: E1125 06:49:22.470193 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:22.970162114 +0000 UTC m=+137.458393373 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.471109 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" event={"ID":"ef330858-933c-41ce-b34b-db48cd8e8200","Type":"ContainerStarted","Data":"68fc0f6532f89f714bb991e6ed1776353378a61abccb85059b50ccfaf7b1e20b"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.476205 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gqc49" event={"ID":"368e9f64-0e31-464e-9714-b4b3ea73cc36","Type":"ContainerStarted","Data":"c593d278ac111fc337697164b4be24933956472aeca1f245f9690a4dd1d5a28d"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.485326 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ng9pj" event={"ID":"2452c3b9-85cf-4e8e-a20f-3adf5fb602c5","Type":"ContainerStarted","Data":"3d7314d89f583197c720caa8c8be714df3156f751b5f4f8046c2dedef6d3f221"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.505161 4482 generic.go:334] "Generic (PLEG): container finished" podID="1ee0a1d1-8292-47bf-885b-a154443af6f4" containerID="7a055e85fa225b570211394cf11a25b6fb4028ccae724ddfde41a3c5d382cf4d" exitCode=0 Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.505276 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" event={"ID":"1ee0a1d1-8292-47bf-885b-a154443af6f4","Type":"ContainerDied","Data":"7a055e85fa225b570211394cf11a25b6fb4028ccae724ddfde41a3c5d382cf4d"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.505310 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" event={"ID":"1ee0a1d1-8292-47bf-885b-a154443af6f4","Type":"ContainerStarted","Data":"4c6993b91c43238a202259eb6592a8266e7fa69852e9518e77c0a65af3a57dcc"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.537979 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" event={"ID":"79ca89d9-d18a-4927-9c58-47754973b8ed","Type":"ContainerStarted","Data":"66f3b490001974256141113562d8bac9594e19a3825b92f031022a01a57e8bce"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.545689 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5w6bs"] Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.561741 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" event={"ID":"40242495-a63d-4300-b420-f7eb4317ea0e","Type":"ContainerStarted","Data":"5f4f32f81b621b47094c9595a1f2f78b7f9900db5ef42900176678a3e43a5d5e"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.571481 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-v9rqm"] Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.571814 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.572424 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9tqlb" event={"ID":"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b","Type":"ContainerStarted","Data":"25e1e8d3eebbcfc8422f82fa1bf345933bb613e90b6ed7adc9853469903ea129"} Nov 25 06:49:22 crc kubenswrapper[4482]: E1125 06:49:22.573701 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:23.07368224 +0000 UTC m=+137.561913500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.620658 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh" event={"ID":"861a93e1-ffca-40f2-ada4-2f736f05ba1c","Type":"ContainerStarted","Data":"ff3132ad12f6a55bd3384442126c0fe2230f21228c651bfd73d11623c5447f95"} Nov 25 06:49:22 crc kubenswrapper[4482]: W1125 06:49:22.631144 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79f103eb_d897_4500_9dd0_995bc41bde7c.slice/crio-29ce34d3537d5aa5ecf131f474f4eb56ef1a0f29a69b1060c756de1633d155eb WatchSource:0}: Error finding container 29ce34d3537d5aa5ecf131f474f4eb56ef1a0f29a69b1060c756de1633d155eb: Status 404 returned error can't find the container with id 29ce34d3537d5aa5ecf131f474f4eb56ef1a0f29a69b1060c756de1633d155eb Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.645287 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv" event={"ID":"00ecd959-d344-450d-91de-06136bac3d80","Type":"ContainerStarted","Data":"ba1ef9604a06ec0450fa9b089fcd4a03649b755195af002d90e2c2e1e82aba9a"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.647243 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr"] Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.671826 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88"] Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.676013 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:22 crc kubenswrapper[4482]: E1125 06:49:22.676260 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:23.176248569 +0000 UTC m=+137.664479827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.711388 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-78b9v" podStartSLOduration=117.711374042 podStartE2EDuration="1m57.711374042s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:22.710978666 +0000 UTC m=+137.199209925" watchObservedRunningTime="2025-11-25 06:49:22.711374042 +0000 UTC m=+137.199605301" Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.723008 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gp4k7" event={"ID":"48ceac98-86e6-40c5-842f-775af04e420a","Type":"ContainerStarted","Data":"fef4f5c005dc6696c484b1e2839a3e004afe0fd7d556639d65b399bb24e620df"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.758633 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z" event={"ID":"f2057b44-f9f5-426d-ac80-b3c576dcb59c","Type":"ContainerStarted","Data":"52ab7877f19e3d0302621a51719319b03007ed15698796d6d0db51597a764662"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.758669 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z" event={"ID":"f2057b44-f9f5-426d-ac80-b3c576dcb59c","Type":"ContainerStarted","Data":"8a7ecd24b1acddb1c057f7ac701164ad60e8fff31256f46647431d69297726bf"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.777642 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:22 crc kubenswrapper[4482]: E1125 06:49:22.783187 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:23.283150726 +0000 UTC m=+137.771381984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.826674 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-78vqp" podStartSLOduration=117.826655483 podStartE2EDuration="1m57.826655483s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:22.824100534 +0000 UTC m=+137.312331793" watchObservedRunningTime="2025-11-25 06:49:22.826655483 +0000 UTC m=+137.314886742" Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.846543 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" event={"ID":"db2a2377-c791-40ef-80e9-15b3884ec7a4","Type":"ContainerStarted","Data":"6ec7db4b5cf9a44702a28fd2a7a7b442fd04bdd8a73c49c3f265b94431b16713"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.850380 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" event={"ID":"db2a2377-c791-40ef-80e9-15b3884ec7a4","Type":"ContainerStarted","Data":"76aa40f7fb8bf4d54de6a5195dae6e6b1696a30ef12a49f612664fefcfbb2529"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.867479 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n" event={"ID":"d0235c8b-901e-4439-8d57-44af3ea11486","Type":"ContainerStarted","Data":"842a773cd2031fae2d9fd11d37514a62ec5664d89e958371ddcac2a6f0e50fc1"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.867527 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n" event={"ID":"d0235c8b-901e-4439-8d57-44af3ea11486","Type":"ContainerStarted","Data":"8c8291823dc54dcd00e329ae0a15ce24a37c51b94a1014ef2d03769c2e021fe4"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.871387 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w" event={"ID":"703d9af4-44eb-40f1-a27f-87668bec5700","Type":"ContainerStarted","Data":"1f2df26de122952068c9c5791492c4b34ca3aa87105d5f350b2b971ab334c74b"} Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.875761 4482 patch_prober.go:28] interesting pod/downloads-7954f5f757-78b9v container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.875799 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78b9v" podUID="13c2044e-5435-4487-be5b-fafa43b6db3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 
06:49:22.884424 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:22 crc kubenswrapper[4482]: E1125 06:49:22.884696 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:23.384684957 +0000 UTC m=+137.872916215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.885532 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:49:22 crc kubenswrapper[4482]: I1125 06:49:22.985264 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:22 crc kubenswrapper[4482]: E1125 06:49:22.986115 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:23.486102028 +0000 UTC m=+137.974333287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.009985 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-58h2l"] Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.043909 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8"] Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.088217 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:23 crc kubenswrapper[4482]: E1125 06:49:23.088522 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:23.588507723 +0000 UTC m=+138.076738982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.103042 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" podStartSLOduration=118.103029213 podStartE2EDuration="1m58.103029213s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:23.100246644 +0000 UTC m=+137.588477923" watchObservedRunningTime="2025-11-25 06:49:23.103029213 +0000 UTC m=+137.591260472" Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.191795 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:23 crc kubenswrapper[4482]: E1125 06:49:23.194507 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:23.69448754 +0000 UTC m=+138.182718798 (durationBeforeRetry 500ms). 
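
The readiness probe failures in this section ("connect: connection refused", typically right after the corresponding ContainerStarted event) are the normal startup pattern: the container is running but its server has not bound the probed port yet, so the kubelet keeps the pod un-Ready and re-probes until the endpoint answers. For reference, a probe of the shape being run against 10.217.0.11:8080/ is declared with the core/v1 API types roughly as below; the period and threshold values are illustrative assumptions, not values read from this cluster:

```go
// probe.go - shape of the readiness probe behind the
// "Readiness probe status=failure ... connection refused" entries.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/",                 // the downloads pod is probed at "/"
				Port: intstr.FromInt(8080), // the 10.217.0.11:8080 target above
			},
		},
		PeriodSeconds:    10, // assumed, not read from this cluster
		FailureThreshold: 3,  // assumed, not read from this cluster
	}
	fmt.Printf("%+v\n", probe)
}
```
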
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.296316 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:23 crc kubenswrapper[4482]: E1125 06:49:23.296577 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:23.796565737 +0000 UTC m=+138.284796997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.315391 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz"] Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.334979 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f"] Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.396895 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:23 crc kubenswrapper[4482]: E1125 06:49:23.398038 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:23.898016941 +0000 UTC m=+138.386248210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.499768 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:23 crc kubenswrapper[4482]: E1125 06:49:23.523996 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:24.023982375 +0000 UTC m=+138.512213634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.537735 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm"] Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.544121 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j675n" podStartSLOduration=118.544109558 podStartE2EDuration="1m58.544109558s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:23.514688095 +0000 UTC m=+138.002919343" watchObservedRunningTime="2025-11-25 06:49:23.544109558 +0000 UTC m=+138.032340818" Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.604638 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:23 crc kubenswrapper[4482]: E1125 06:49:23.605013 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:24.105002524 +0000 UTC m=+138.593233782 (durationBeforeRetry 500ms). 
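
The pod_startup_latency_tracker entries are self-consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, so for kube-apiserver-operator above, 06:49:23.544109558 minus 06:47:25 gives the reported 1m58.544109558s. The arithmetic can be checked directly from the timestamps in the entry:

```go
// latency.go - reproduce podStartE2EDuration from the timestamps in
// the pod_startup_latency_tracker entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-11-25 06:47:25 +0000 UTC")          // podCreationTimestamp
	running, _ := time.Parse(layout, "2025-11-25 06:49:23.544109558 +0000 UTC") // watchObservedRunningTime
	fmt.Println(running.Sub(created)) // 1m58.544109558s, matching podStartE2EDuration
}
```
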
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.614660 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-26zgh"] Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.664230 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gvbtp"] Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.665063 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-ng9pj" podStartSLOduration=5.665043421 podStartE2EDuration="5.665043421s" podCreationTimestamp="2025-11-25 06:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:23.647822179 +0000 UTC m=+138.136053438" watchObservedRunningTime="2025-11-25 06:49:23.665043421 +0000 UTC m=+138.153274670" Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.689013 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vn9jt" podStartSLOduration=118.688994507 podStartE2EDuration="1m58.688994507s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:23.688457104 +0000 UTC m=+138.176688363" watchObservedRunningTime="2025-11-25 06:49:23.688994507 +0000 UTC m=+138.177225766" Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.707910 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:23 crc kubenswrapper[4482]: E1125 06:49:23.710633 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:24.210619035 +0000 UTC m=+138.698850294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.733492 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5"] Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.784051 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4"] Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.802970 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl"] Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.816363 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:23 crc kubenswrapper[4482]: E1125 06:49:23.817420 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:24.317290887 +0000 UTC m=+138.805522146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.861832 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz"] Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.913736 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" event={"ID":"61e22994-72d9-477f-8f3f-89a77ade8196","Type":"ContainerStarted","Data":"9506ef3a529177c01ae6521bc2c252d1c3e8f15e9ef7a070e19fd9d88fa99b4a"} Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.914251 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.916855 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6czb8" event={"ID":"d82a8d2c-46a2-4c77-b524-57c894fbc0a0","Type":"ContainerStarted","Data":"a1135c9d875a47dc181b1fdc8a60dc326dcb3aa6bf723ab6e06ffe4d6ffdccad"} Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.918022 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-b248r" event={"ID":"2f1e7a69-3cac-4d41-9fa2-72f14d7171be","Type":"ContainerStarted","Data":"719a7ea4fd5227e24b691f867c2772adebfac05898871a3722daa8cb274894ef"} Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.919360 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:23 crc kubenswrapper[4482]: E1125 06:49:23.920239 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:24.420220922 +0000 UTC m=+138.908452180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.921831 4482 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-f8zk7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.921864 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" podUID="61e22994-72d9-477f-8f3f-89a77ade8196" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.927306 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gqc49" event={"ID":"368e9f64-0e31-464e-9714-b4b3ea73cc36","Type":"ContainerStarted","Data":"5f024aa45c426091a75ad57d34f1f178e461d078a8c54717cd7d78e0badf58eb"} Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.931611 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z" event={"ID":"f2057b44-f9f5-426d-ac80-b3c576dcb59c","Type":"ContainerStarted","Data":"a742d2d61c69af8bd61fce8cafde6d0f40bc753b1b51e21a0824d0006de4a20d"} Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.937840 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" event={"ID":"40242495-a63d-4300-b420-f7eb4317ea0e","Type":"ContainerStarted","Data":"c4c3092dc86b0d4c0006dff0957baca3b4b19b4f90e26a6d18e1603b9c64afe0"} Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.941284 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" podStartSLOduration=118.941264663 podStartE2EDuration="1m58.941264663s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:23.938281526 +0000 UTC m=+138.426512786" watchObservedRunningTime="2025-11-25 06:49:23.941264663 +0000 UTC m=+138.429495923" Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.949380 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" event={"ID":"8200abb3-4189-4dae-b0d3-9f09c330e278","Type":"ContainerStarted","Data":"3611aa54af4ef37f4d560c8d12207c8ec89e0ac797a19216fa57c63c7a9ce437"} Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.949406 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" event={"ID":"8200abb3-4189-4dae-b0d3-9f09c330e278","Type":"ContainerStarted","Data":"6c699d868ecbf7f581256b341cd2ab5574d13b648e360945bbb20ea7dd967dde"} Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.949982 4482 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.968533 4482 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2h8cx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.968566 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" podUID="8200abb3-4189-4dae-b0d3-9f09c330e278" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.970865 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" event={"ID":"ef330858-933c-41ce-b34b-db48cd8e8200","Type":"ContainerStarted","Data":"2234c6f0436609d3eba4a8106c8c05843e0485276f308868a644297d1d0da6f5"} Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.971671 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.978561 4482 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-shnd8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.978611 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" podUID="ef330858-933c-41ce-b34b-db48cd8e8200" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 25 06:49:23 crc kubenswrapper[4482]: I1125 06:49:23.995015 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" event={"ID":"ebdc3669-daa5-4220-9042-265024c56738","Type":"ContainerStarted","Data":"294083e7b62db2620bd0f2c9c13fe817b104b74b169f87dbb455f26b4addc609"} Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.007918 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6t25z" podStartSLOduration=119.007906021 podStartE2EDuration="1m59.007906021s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:23.980694205 +0000 UTC m=+138.468925464" watchObservedRunningTime="2025-11-25 06:49:24.007906021 +0000 UTC m=+138.496137280" Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.008218 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-gqc49" podStartSLOduration=119.00821312 podStartE2EDuration="1m59.00821312s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 
06:49:24.006841944 +0000 UTC m=+138.495073203" watchObservedRunningTime="2025-11-25 06:49:24.00821312 +0000 UTC m=+138.496444379" Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.009972 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" event={"ID":"6735e099-a06c-4b53-8c17-c3f644d7ba91","Type":"ContainerStarted","Data":"dad44f2cc55df69cdccc45e66b427dd71ef2c3d7f87b83bf9e772de895cc95b0"} Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.021837 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:24 crc kubenswrapper[4482]: E1125 06:49:24.022876 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:24.522853464 +0000 UTC m=+139.011084723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.044672 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9tqlb" event={"ID":"32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b","Type":"ContainerStarted","Data":"6dd9b97446813ddd2338c8296e471636c42df1aa7da4041918e27a421b981a74"} Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.045333 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-9tqlb" Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.054713 4482 patch_prober.go:28] interesting pod/console-operator-58897d9998-9tqlb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.054745 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-9tqlb" podUID="32e58bfd-26ac-4e78-89f2-eeb3c6d6cf1b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.058337 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr" event={"ID":"9ff92469-ca47-4359-b56a-8df7332739ab","Type":"ContainerStarted","Data":"950bddd38864b361c536524818498b89cd4663f803629fc794d1803f37e7c730"} Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.070466 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" 
podStartSLOduration=119.070448427 podStartE2EDuration="1m59.070448427s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:24.050065831 +0000 UTC m=+138.538297090" watchObservedRunningTime="2025-11-25 06:49:24.070448427 +0000 UTC m=+138.558679686" Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.091672 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh" event={"ID":"861a93e1-ffca-40f2-ada4-2f736f05ba1c","Type":"ContainerStarted","Data":"5bdf8b51ab96277255f794f9ea9f02ca13227c7cfbd7d9329e90ddc9f0578253"} Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.092681 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" podStartSLOduration=119.09266947 podStartE2EDuration="1m59.09266947s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:24.071063186 +0000 UTC m=+138.559294445" watchObservedRunningTime="2025-11-25 06:49:24.09266947 +0000 UTC m=+138.580900728" Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.114000 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-9tqlb" podStartSLOduration=119.113986538 podStartE2EDuration="1m59.113986538s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:24.091438378 +0000 UTC m=+138.579669657" watchObservedRunningTime="2025-11-25 06:49:24.113986538 +0000 UTC m=+138.602217797" Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.115742 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dzgqh" podStartSLOduration=119.115734875 podStartE2EDuration="1m59.115734875s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:24.114587921 +0000 UTC m=+138.602819180" watchObservedRunningTime="2025-11-25 06:49:24.115734875 +0000 UTC m=+138.603966124" Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.124245 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:24 crc kubenswrapper[4482]: E1125 06:49:24.125071 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:24.625053963 +0000 UTC m=+139.113285222 (durationBeforeRetry 500ms). 
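
The exit condition for the mount/unmount loop is visible just above: hostpath-provisioner/csi-hostpathplugin-gvbtp has been scheduled and its first container has started. Once the plugin's registrar announces the driver over the kubelet's plugin-registration socket, the parked operations start succeeding on their next retry instead of failing the driver lookup. On the node, those sockets live under /var/lib/kubelet/plugins_registry, the kubelet's standard plugin registration directory; listing it is a quick way to see which drivers have announced themselves, sketched here:

```go
// sockets.go - list CSI plugin registration sockets on the node.
// /var/lib/kubelet/plugins_registry is the kubelet's standard plugin
// registration directory; run this on the node itself (the CRC VM).
package main

import (
	"fmt"
	"log"
	"path/filepath"
)

func main() {
	socks, err := filepath.Glob("/var/lib/kubelet/plugins_registry/*.sock")
	if err != nil {
		log.Fatal(err)
	}
	if len(socks) == 0 {
		fmt.Println("no drivers have registered yet - matches the errors above")
	}
	for _, s := range socks {
		fmt.Println("registration socket:", s) // e.g. kubevirt.io.hostpath-provisioner-reg.sock
	}
}
```
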
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.132954 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv" event={"ID":"00ecd959-d344-450d-91de-06136bac3d80","Type":"ContainerStarted","Data":"73265d350bfbecbc142092db22c08220227db84b2a2d5acd96c7cc74af867cb8"} Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.157329 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm" event={"ID":"ec6261f9-cc3f-4940-9144-7617d2b81676","Type":"ContainerStarted","Data":"1a7e8d3b23ebeed5f8145d400654dfdd53b17a4a12ef21ef1d7fa0c9a8f8837a"} Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.165104 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-26zgh" event={"ID":"299bc1da-cbd5-4574-8811-8fa2cf39529d","Type":"ContainerStarted","Data":"36a34143d0f98dc0acc2a7174244f718526ef934e84d0c7ef7ac7135d3b4b836"} Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.227050 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:24 crc kubenswrapper[4482]: E1125 06:49:24.227872 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:24.727827091 +0000 UTC m=+139.216058350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:24 crc kubenswrapper[4482]: E1125 06:49:24.230111 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:24.730094387 +0000 UTC m=+139.218325646 (durationBeforeRetry 500ms). 
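
Note the two operation keys in the adjacent errors: the UnmountVolume is keyed with podName 8f668bae-612b-4b75-9490-919e737c6a3b (the terminating pod that still holds the volume), while the MountDevice for the replacement image-registry pod carries an empty podName, since device-level staging is per volume rather than per pod. nestedpendingoperations uses keys of this shape to avoid starting a second copy of an operation that is already pending on the same target; the kubelet's real exclusivity rules are richer than this, but a toy version shows the keying:

```go
// opkeys.go - toy version of the {volumeName, podName, nodeName}
// operation keys printed in the nestedpendingoperations errors above;
// the real kubelet logic adds exclusivity rules and per-key backoff.
package main

import "fmt"

type operationKey struct {
	volumeName string
	podName    string // empty for volume-wide (device) operations
	nodeName   string
}

func main() {
	pending := map[operationKey]bool{}

	unmount := operationKey{
		volumeName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8",
		podName:    "8f668bae-612b-4b75-9490-919e737c6a3b",
	}
	mountDevice := operationKey{
		volumeName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8",
		// podName deliberately empty: MountDevice is per volume, not per pod
	}

	for _, k := range []operationKey{unmount, mountDevice} {
		if pending[k] {
			fmt.Println("operation already pending, not starting a second one:", k)
			continue
		}
		pending[k] = true
		fmt.Println("started:", k)
	}
}
```
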
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.231130 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz" event={"ID":"340a9fad-eda3-46b1-a1d2-64231fb78d62","Type":"ContainerStarted","Data":"277956c4ad2b0fcd060e6e98f23ae59a2cd4e9be3ab47134889978a8cb7f74f4"}
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.239277 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.255415 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-58h2l" event={"ID":"1e2cfd46-a0a5-4138-9093-b4bd411c6390","Type":"ContainerStarted","Data":"82feaabd6f66bd79097f859dbf15defb41bf6a22a1890ce41da2cf8a64f85f81"}
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.270095 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w" event={"ID":"703d9af4-44eb-40f1-a27f-87668bec5700","Type":"ContainerStarted","Data":"af109f2c2d2ff39883c580fb37e03d8fec1b7fb9a9107869a1712166d73ec28f"}
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.297030 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ng9pj" event={"ID":"2452c3b9-85cf-4e8e-a20f-3adf5fb602c5","Type":"ContainerStarted","Data":"ed7b04aab62c8b1b2dd107cc921f550012731bdd41bfa78704ae7241fd302c7d"}
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.332670 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f" event={"ID":"c6901f70-de25-46df-a04b-7e1dcb979454","Type":"ContainerStarted","Data":"1312284831be2e1bfac62ef83e82541c7386c632675f20b47968f90ad8103a2f"}
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.338662 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm" event={"ID":"1b992fb6-c183-4a39-9438-9ae970028bbf","Type":"ContainerStarted","Data":"d92cd72092477392e219eaf9e47e5efd1d4ac7981e9096f26c2c1f776fc45027"}
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.342121 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:24 crc kubenswrapper[4482]: E1125 06:49:24.343970 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:24.843946612 +0000 UTC m=+139.332177871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.344362 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88" event={"ID":"4c097a8f-db6e-4f47-b014-1c9c75a92ad8","Type":"ContainerStarted","Data":"219a9a6b3abaf32e38e7ddea3bf1961ab1b59e2caccb8987274fe87c96f7bce0"}
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.348469 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-v9rqm" event={"ID":"e2b7e856-0bf2-44b9-868c-8181204573c4","Type":"ContainerStarted","Data":"00f2974e10cc9653ae317f2e91ee9c5add57a96ec1b9b3c9dba1994137431fe1"}
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.363498 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9" event={"ID":"79f103eb-d897-4500-9dd0-995bc41bde7c","Type":"ContainerStarted","Data":"29ce34d3537d5aa5ecf131f474f4eb56ef1a0f29a69b1060c756de1633d155eb"}
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.377234 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-zhw8w" podStartSLOduration=119.377216757 podStartE2EDuration="1m59.377216757s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:24.309353765 +0000 UTC m=+138.797585024" watchObservedRunningTime="2025-11-25 06:49:24.377216757 +0000 UTC m=+138.865448016"
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.413833 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9" event={"ID":"d8890596-b9fd-4710-9293-687c209c6090","Type":"ContainerStarted","Data":"de35dbf94973dfdefe82a1be33f3eba939a823dc855770223f21e8a80654488d"}
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.413882 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9" event={"ID":"d8890596-b9fd-4710-9293-687c209c6090","Type":"ContainerStarted","Data":"7371604ffbc3a26b7fcb80d0febaf481ee739e5c3cba4fc9f8183f8ccded652a"}
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.414448 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9"
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.421793 4482 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-djrs9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.421861 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9" podUID="d8890596-b9fd-4710-9293-687c209c6090" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.436848 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gnvtm" podStartSLOduration=119.436829316 podStartE2EDuration="1m59.436829316s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:24.378329636 +0000 UTC m=+138.866560894" watchObservedRunningTime="2025-11-25 06:49:24.436829316 +0000 UTC m=+138.925060575"
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.437301 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9" podStartSLOduration=119.437295505 podStartE2EDuration="1m59.437295505s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:24.437152196 +0000 UTC m=+138.925383455" watchObservedRunningTime="2025-11-25 06:49:24.437295505 +0000 UTC m=+138.925526765"
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.444508 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:24 crc kubenswrapper[4482]: E1125 06:49:24.445952 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:24.945941283 +0000 UTC m=+139.434172541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.493029 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" event={"ID":"754234c1-cad7-452b-b7af-be15353682c9","Type":"ContainerStarted","Data":"c772e760e34fd2fbf75da52d3a8eea25ff65332055a905667ea4829de4e31048"}
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.494220 4482 patch_prober.go:28] interesting pod/downloads-7954f5f757-78b9v container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.494268 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78b9v" podUID="13c2044e-5435-4487-be5b-fafa43b6db3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.545700 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:24 crc kubenswrapper[4482]: E1125 06:49:24.546321 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:25.046303474 +0000 UTC m=+139.534534733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.649434 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:24 crc kubenswrapper[4482]: E1125 06:49:24.652950 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:25.152925712 +0000 UTC m=+139.641156971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.750500 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:24 crc kubenswrapper[4482]: E1125 06:49:24.750785 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:25.250772919 +0000 UTC m=+139.739004179 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.853217 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:24 crc kubenswrapper[4482]: E1125 06:49:24.853796 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:25.353783816 +0000 UTC m=+139.842015066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.957830 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:24 crc kubenswrapper[4482]: E1125 06:49:24.958749 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:25.458711158 +0000 UTC m=+139.946942417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:24 crc kubenswrapper[4482]: I1125 06:49:24.958886 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:24 crc kubenswrapper[4482]: E1125 06:49:24.959313 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:25.459305438 +0000 UTC m=+139.947536697 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.059708 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:25 crc kubenswrapper[4482]: E1125 06:49:25.059861 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:25.559846446 +0000 UTC m=+140.048077706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.063918 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:25 crc kubenswrapper[4482]: E1125 06:49:25.064451 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:25.564438498 +0000 UTC m=+140.052669757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.165209 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:25 crc kubenswrapper[4482]: E1125 06:49:25.165550 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:25.665537539 +0000 UTC m=+140.153768798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.266810 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:25 crc kubenswrapper[4482]: E1125 06:49:25.267152 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:25.767143065 +0000 UTC m=+140.255374324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.367672 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:25 crc kubenswrapper[4482]: E1125 06:49:25.367813 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:25.867793329 +0000 UTC m=+140.356024588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.368522 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:25 crc kubenswrapper[4482]: E1125 06:49:25.368979 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:25.868967503 +0000 UTC m=+140.357198763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.470758 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:25 crc kubenswrapper[4482]: E1125 06:49:25.471365 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:25.971323926 +0000 UTC m=+140.459555185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.507354 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-58h2l" event={"ID":"1e2cfd46-a0a5-4138-9093-b4bd411c6390","Type":"ContainerStarted","Data":"e2ccd4e72611e7a6533b21f962cbf932beca4134e790c9c3a99a41d719680bb6"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.507667 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-58h2l" event={"ID":"1e2cfd46-a0a5-4138-9093-b4bd411c6390","Type":"ContainerStarted","Data":"5fedae9549ec45997ad304cbaeffeebeb7b0940317f701f569adfb539fd4cda3"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.508929 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" event={"ID":"ebdc3669-daa5-4220-9042-265024c56738","Type":"ContainerStarted","Data":"57d61c2a4ea2597ec05ce87f686f42430d6e8ea203ffc6f7da16024817ebb2f4"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.510544 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" event={"ID":"754234c1-cad7-452b-b7af-be15353682c9","Type":"ContainerStarted","Data":"1a397d9908582da446aa44c816f526ce7003e0bf28740f47c0c21c4958fdd7a4"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.511127 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.524614 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" event={"ID":"40242495-a63d-4300-b420-f7eb4317ea0e","Type":"ContainerStarted","Data":"0ec03709935435f3585d951e3887de2dde39478bd48ef042d29cd201251182e0"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.528785 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-b248r" event={"ID":"2f1e7a69-3cac-4d41-9fa2-72f14d7171be","Type":"ContainerStarted","Data":"962e741632ef3af01133e615d194560fd90fc83d0b47eb6428adfb222d5c5102"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.534279 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88" event={"ID":"4c097a8f-db6e-4f47-b014-1c9c75a92ad8","Type":"ContainerStarted","Data":"7400594a0ec7ad4ccd79f02dac7f1e4e380178db55509e22175ed5408079df53"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.542623 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" event={"ID":"1ee0a1d1-8292-47bf-885b-a154443af6f4","Type":"ContainerStarted","Data":"65dd60a6ba87214983c7d11a3eed26466021897de2f1c6ed423bf5972a8bbd7f"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.543153 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.555268 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz" event={"ID":"340a9fad-eda3-46b1-a1d2-64231fb78d62","Type":"ContainerStarted","Data":"494eb14a6f18c624e01ae66f68e137211d85486db520f1910d8631e732551157"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.555344 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz" event={"ID":"340a9fad-eda3-46b1-a1d2-64231fb78d62","Type":"ContainerStarted","Data":"8a5ab808020bca6837806f83e6d7175bb2c3bd76831560be3a5922e4739e231f"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.555365 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.571316 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n56kp" event={"ID":"15832b7c-8637-457d-bf40-c9d8ae03445d","Type":"ContainerStarted","Data":"b3119ac63df4ef1da7e48a876761b3a39a22fef7ed8376c3ab3188567bb52aeb"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.572391 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:25 crc kubenswrapper[4482]: E1125 06:49:25.572670 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:26.072659122 +0000 UTC m=+140.560890381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.578198 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm" event={"ID":"ec6261f9-cc3f-4940-9144-7617d2b81676","Type":"ContainerStarted","Data":"f9a3c31fe5bc2ae3cd5fc426eb23ac717a33f87a2451c4222923f7464bb63acb"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.578741 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.582277 4482 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-689dm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.582318 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm" podUID="ec6261f9-cc3f-4940-9144-7617d2b81676" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.588867 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-58h2l" podStartSLOduration=120.588853298 podStartE2EDuration="2m0.588853298s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:25.550013899 +0000 UTC m=+140.038245158" watchObservedRunningTime="2025-11-25 06:49:25.588853298 +0000 UTC m=+140.077084546"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.598569 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv" event={"ID":"00ecd959-d344-450d-91de-06136bac3d80","Type":"ContainerStarted","Data":"b660d52745bbe20eda866995a8bb01d42daece6319cdd37b1eff322194cf5954"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.600446 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gp4k7" event={"ID":"48ceac98-86e6-40c5-842f-775af04e420a","Type":"ContainerStarted","Data":"5c54630358d39e930a206e0a0d7e20142543e0b6856bd9f1b8e0f2e1c553be72"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.600825 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gp4k7"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.602355 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" event={"ID":"cf220ddc-cabc-43db-8281-d9304d65c625","Type":"ContainerStarted","Data":"6fa76ef413854ebfce518ad8069e44c420d64023255540211b04e394c8c9c088"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.602380 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" event={"ID":"cf220ddc-cabc-43db-8281-d9304d65c625","Type":"ContainerStarted","Data":"a5dfe659d52304ae24dc1dd5a2deb03ddb6348414b84b96f6a1871d392f0948d"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.602392 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" event={"ID":"cf220ddc-cabc-43db-8281-d9304d65c625","Type":"ContainerStarted","Data":"11b2c8bdabd3783873f028ec0072eb0a5709acb78606227944d7c008a41d5bfc"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.640437 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-26zgh" event={"ID":"299bc1da-cbd5-4574-8811-8fa2cf39529d","Type":"ContainerStarted","Data":"0cf382118bdf3a32cebf2ab765801436551f417e94b50a7062e3d7447acc84e3"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.648871 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-b248r" podStartSLOduration=7.648848419 podStartE2EDuration="7.648848419s" podCreationTimestamp="2025-11-25 06:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:25.64851498 +0000 UTC m=+140.136746239" watchObservedRunningTime="2025-11-25 06:49:25.648848419 +0000 UTC m=+140.137079678"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.649327 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" podStartSLOduration=120.649319297 podStartE2EDuration="2m0.649319297s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:25.590234602 +0000 UTC m=+140.078465862" watchObservedRunningTime="2025-11-25 06:49:25.649319297 +0000 UTC m=+140.137550556"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.673399 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:25 crc kubenswrapper[4482]: E1125 06:49:25.674912 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:26.174897602 +0000 UTC m=+140.663128862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.691235 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4" event={"ID":"66b7a3ae-811e-43ea-8d7b-33793e9327b9","Type":"ContainerStarted","Data":"4a214398210d87606029f92fd1dd8f3398675e2015ef6ebd36d3d0eb991ad6a6"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.691282 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4" event={"ID":"66b7a3ae-811e-43ea-8d7b-33793e9327b9","Type":"ContainerStarted","Data":"127cd2e70b39082c7284606dc355d8f9dd95de5e97d0f2c3322533a4cc6bcf2d"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.716536 4482 generic.go:334] "Generic (PLEG): container finished" podID="79ca89d9-d18a-4927-9c58-47754973b8ed" containerID="a9db010c3331c09ffaeeb9ac0ac96256076f8fc651dd06a3a41bfa779964e72a" exitCode=0
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.716605 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" event={"ID":"79ca89d9-d18a-4927-9c58-47754973b8ed","Type":"ContainerDied","Data":"a9db010c3331c09ffaeeb9ac0ac96256076f8fc651dd06a3a41bfa779964e72a"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.737499 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5" event={"ID":"284d18dc-91eb-4c28-937a-8f7a03e32af0","Type":"ContainerStarted","Data":"9308d588e7916e8249935270d3f47eb064b485bbc715fc483f8bd4dd5a7eed82"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.760963 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" event={"ID":"43f33231-2b25-4a54-87da-e93c8cf3ee18","Type":"ContainerStarted","Data":"5ce1f92c65c83a0bae3f14b84f03c3b1b80d9014192f453bd9adbc4dc9af40e7"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.761001 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" event={"ID":"43f33231-2b25-4a54-87da-e93c8cf3ee18","Type":"ContainerStarted","Data":"0cd134b480f5cce2f8075235b1753f3a4c2dd8ff618bad3fe3be120d82f5edeb"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.776876 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:25 crc kubenswrapper[4482]: E1125 06:49:25.777826 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:26.277814532 +0000 UTC m=+140.766045791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.785566 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" podStartSLOduration=120.785553459 podStartE2EDuration="2m0.785553459s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:25.728744597 +0000 UTC m=+140.216975856" watchObservedRunningTime="2025-11-25 06:49:25.785553459 +0000 UTC m=+140.273784718"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.786159 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-v9rqm" event={"ID":"e2b7e856-0bf2-44b9-868c-8181204573c4","Type":"ContainerStarted","Data":"9a67069ac5e011da9ec6dbe08edf54c516f3982c0ca4d02347179dfaab2d05c1"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.815896 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz" event={"ID":"8792fd68-7e83-485d-af18-3d521ab37cbd","Type":"ContainerStarted","Data":"d8f8ca27a8e70bd9055762a4102456b8c6c099ffb40e12b5a4a7844a86699bdc"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.815929 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz" event={"ID":"8792fd68-7e83-485d-af18-3d521ab37cbd","Type":"ContainerStarted","Data":"8e03a39b9bd5200df926ea9a734c9d320d2e7d5f6c4ad5d811fb53bd69c426a3"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.817332 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr" event={"ID":"9ff92469-ca47-4359-b56a-8df7332739ab","Type":"ContainerStarted","Data":"15fbe8f652383d0e7eda94bc0e38826dbb0cd557ed7d2c674bd037ed6e133196"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.818452 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9" event={"ID":"79f103eb-d897-4500-9dd0-995bc41bde7c","Type":"ContainerStarted","Data":"003575810d8f9d8ca424aa4278f5e1db29688674643a36b243926f30deb95471"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.821402 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f" event={"ID":"c6901f70-de25-46df-a04b-7e1dcb979454","Type":"ContainerStarted","Data":"9b0dc4b30eb28909de83108a7a4b07b114c1015120bf6d549f4a3fc8ad53e4b5"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.821429 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f" event={"ID":"c6901f70-de25-46df-a04b-7e1dcb979454","Type":"ContainerStarted","Data":"665af21204f907d75a7053cac82fd795dc8c279849630f381c4388bb3ab878fd"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.830365 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6czb8" event={"ID":"d82a8d2c-46a2-4c77-b524-57c894fbc0a0","Type":"ContainerStarted","Data":"92492a326426aa8c5930e5b36cc7d321eea5288c39d618a880e976343762f7f0"}
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.834234 4482 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2h8cx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.834285 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" podUID="8200abb3-4189-4dae-b0d3-9f09c330e278" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.846211 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.846257 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-djrs9"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.849376 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.877858 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:25 crc kubenswrapper[4482]: E1125 06:49:25.878893 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:26.378879768 +0000 UTC m=+140.867111027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.922488 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-9tqlb"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.952133 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-b8j88" podStartSLOduration=120.952121786 podStartE2EDuration="2m0.952121786s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:25.786790722 +0000 UTC m=+140.275021981" watchObservedRunningTime="2025-11-25 06:49:25.952121786 +0000 UTC m=+140.440353045"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.980303 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:25 crc kubenswrapper[4482]: E1125 06:49:25.982798 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:26.482784832 +0000 UTC m=+140.971016091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.989228 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-6p2lq"
Nov 25 06:49:25 crc kubenswrapper[4482]: I1125 06:49:25.989869 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-6p2lq"
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.085072 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-6czb8"
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.085554 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:26 crc kubenswrapper[4482]: E1125 06:49:26.085602 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:26.585591033 +0000 UTC m=+141.073822292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.086111 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:26 crc kubenswrapper[4482]: E1125 06:49:26.086379 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:26.586372196 +0000 UTC m=+141.074603455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.093794 4482 patch_prober.go:28] interesting pod/router-default-5444994796-6czb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 06:49:26 crc kubenswrapper[4482]: [-]has-synced failed: reason withheld
Nov 25 06:49:26 crc kubenswrapper[4482]: [+]process-running ok
Nov 25 06:49:26 crc kubenswrapper[4482]: healthz check failed
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.093826 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6czb8" podUID="d82a8d2c-46a2-4c77-b524-57c894fbc0a0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.126557 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-5w6bs" podStartSLOduration=121.126536332 podStartE2EDuration="2m1.126536332s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:26.125528271 +0000 UTC m=+140.613759530" watchObservedRunningTime="2025-11-25 06:49:26.126536332 +0000 UTC m=+140.614767592"
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.128336 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zhtl" podStartSLOduration=121.128325257 podStartE2EDuration="2m1.128325257s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:26.059055031 +0000 UTC m=+140.547286290" watchObservedRunningTime="2025-11-25 06:49:26.128325257 +0000 UTC m=+140.616556516"
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.184013 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p9xxv" podStartSLOduration=123.183995942 podStartE2EDuration="2m3.183995942s" podCreationTimestamp="2025-11-25 06:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:26.178463166 +0000 UTC m=+140.666694424" watchObservedRunningTime="2025-11-25 06:49:26.183995942 +0000 UTC m=+140.672227201"
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.187834 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:26 crc kubenswrapper[4482]: E1125 06:49:26.188115 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:26.688096466 +0000 UTC m=+141.176327725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.188305 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:26 crc kubenswrapper[4482]: E1125 06:49:26.188598 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:26.688589026 +0000 UTC m=+141.176820284 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.301735 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:26 crc kubenswrapper[4482]: E1125 06:49:26.302196 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:26.802178224 +0000 UTC m=+141.290409483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.400725 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-n56kp" podStartSLOduration=121.400705354 podStartE2EDuration="2m1.400705354s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:26.300984603 +0000 UTC m=+140.789215862" watchObservedRunningTime="2025-11-25 06:49:26.400705354 +0000 UTC m=+140.888936643"
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.405497 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:26 crc kubenswrapper[4482]: E1125 06:49:26.406096 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:26.906080112 +0000 UTC m=+141.394311371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.512953 4482 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4kxk8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.513316 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" podUID="754234c1-cad7-452b-b7af-be15353682c9" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.514324 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:26 crc kubenswrapper[4482]: E1125 06:49:26.514911 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:27.014890036 +0000 UTC m=+141.503121295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.547930 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm" podStartSLOduration=121.547906872 podStartE2EDuration="2m1.547906872s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:26.481697009 +0000 UTC m=+140.969928268" watchObservedRunningTime="2025-11-25 06:49:26.547906872 +0000 UTC m=+141.036138131"
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.549704 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5djwl" podStartSLOduration=121.549693453 podStartE2EDuration="2m1.549693453s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:26.542478954 +0000 UTC m=+141.030710214" watchObservedRunningTime="2025-11-25 06:49:26.549693453 +0000 UTC m=+141.037924711"
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.604330 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-v9rqm" podStartSLOduration=121.604311402 podStartE2EDuration="2m1.604311402s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:26.604306171 +0000 UTC m=+141.092537431" watchObservedRunningTime="2025-11-25 06:49:26.604311402 +0000 UTC m=+141.092542661"
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.618871 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:26 crc kubenswrapper[4482]: E1125 06:49:26.619270 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:27.119257072 +0000 UTC m=+141.607488331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.694972 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4npdz" podStartSLOduration=121.694955461 podStartE2EDuration="2m1.694955461s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:26.648256089 +0000 UTC m=+141.136487348" watchObservedRunningTime="2025-11-25 06:49:26.694955461 +0000 UTC m=+141.183186721"
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.695822 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz" podStartSLOduration=121.695816416 podStartE2EDuration="2m1.695816416s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:26.694728985 +0000 UTC m=+141.182960244" watchObservedRunningTime="2025-11-25 06:49:26.695816416 +0000 UTC m=+141.184047675"
Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.720390 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:26 crc kubenswrapper[4482]: E1125 06:49:26.720595 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:27.220564606 +0000 UTC m=+141.708795865 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.720697 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:26 crc kubenswrapper[4482]: E1125 06:49:26.721210 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:27.221190356 +0000 UTC m=+141.709421615 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.746466 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr" podStartSLOduration=121.746446834 podStartE2EDuration="2m1.746446834s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:26.744963225 +0000 UTC m=+141.233194484" watchObservedRunningTime="2025-11-25 06:49:26.746446834 +0000 UTC m=+141.234678093" Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.766404 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fv75f" podStartSLOduration=121.766383828 podStartE2EDuration="2m1.766383828s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:26.760876872 +0000 UTC m=+141.249108131" watchObservedRunningTime="2025-11-25 06:49:26.766383828 +0000 UTC m=+141.254615088" Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.822645 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:26 crc kubenswrapper[4482]: E1125 06:49:26.823282 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:27.323263303 +0000 UTC m=+141.811494562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.835767 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-v9rqm" event={"ID":"e2b7e856-0bf2-44b9-868c-8181204573c4","Type":"ContainerStarted","Data":"81aad474113b900582f8e921cee4bf6293511c2d95109df042ff8f9ec02feac5"} Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.838088 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" event={"ID":"79ca89d9-d18a-4927-9c58-47754973b8ed","Type":"ContainerStarted","Data":"8c748220e772f0672df25dd8d278a093c721700806c1a8bf46270e84a82d5476"} Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.839321 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" event={"ID":"6735e099-a06c-4b53-8c17-c3f644d7ba91","Type":"ContainerStarted","Data":"00e4546c8d829123433cd2c3e995828fff3ee900e7081a3063c07bf0a482e7cd"} Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.840416 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5" event={"ID":"284d18dc-91eb-4c28-937a-8f7a03e32af0","Type":"ContainerStarted","Data":"939a3e37af1b7e0ca0ce353f63100bc4b0a14313f45d733a298c87200f4f1bb6"} Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.841702 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gp4k7" event={"ID":"48ceac98-86e6-40c5-842f-775af04e420a","Type":"ContainerStarted","Data":"3225a165af1bef069594ce3633eb6f29c1e419fcca56af4cbdaf6f91dc07ce38"} Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.843460 4482 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2h8cx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.843604 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" podUID="8200abb3-4189-4dae-b0d3-9f09c330e278" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.930963 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:26 crc kubenswrapper[4482]: E1125 
06:49:26.933080 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:27.433061573 +0000 UTC m=+141.921292832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.936502 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-689dm" Nov 25 06:49:26 crc kubenswrapper[4482]: I1125 06:49:26.949123 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" podStartSLOduration=121.949103972 podStartE2EDuration="2m1.949103972s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:26.948151324 +0000 UTC m=+141.436382593" watchObservedRunningTime="2025-11-25 06:49:26.949103972 +0000 UTC m=+141.437335230" Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.022143 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gp4k7" podStartSLOduration=9.022123128 podStartE2EDuration="9.022123128s" podCreationTimestamp="2025-11-25 06:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:26.986452838 +0000 UTC m=+141.474684097" watchObservedRunningTime="2025-11-25 06:49:27.022123128 +0000 UTC m=+141.510354387" Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.033009 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:27 crc kubenswrapper[4482]: E1125 06:49:27.033358 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:27.533345345 +0000 UTC m=+142.021576604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.097410 4482 patch_prober.go:28] interesting pod/router-default-5444994796-6czb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 06:49:27 crc kubenswrapper[4482]: [-]has-synced failed: reason withheld Nov 25 06:49:27 crc kubenswrapper[4482]: [+]process-running ok Nov 25 06:49:27 crc kubenswrapper[4482]: healthz check failed Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.135271 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6czb8" podUID="d82a8d2c-46a2-4c77-b524-57c894fbc0a0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.095118 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lmqb9" podStartSLOduration=122.095103182 podStartE2EDuration="2m2.095103182s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:27.024524016 +0000 UTC m=+141.512755275" watchObservedRunningTime="2025-11-25 06:49:27.095103182 +0000 UTC m=+141.583334441" Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.136220 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:27 crc kubenswrapper[4482]: E1125 06:49:27.136610 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:27.63659915 +0000 UTC m=+142.124830410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.188055 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-6czb8" podStartSLOduration=122.18803647 podStartE2EDuration="2m2.18803647s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:27.178083276 +0000 UTC m=+141.666314536" watchObservedRunningTime="2025-11-25 06:49:27.18803647 +0000 UTC m=+141.676267729" Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.239868 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:27 crc kubenswrapper[4482]: E1125 06:49:27.240638 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:27.740619872 +0000 UTC m=+142.228851131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.327849 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ggws" Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.341341 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:27 crc kubenswrapper[4482]: E1125 06:49:27.341634 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:27.841623472 +0000 UTC m=+142.329854732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.344690 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-26zgh" podStartSLOduration=122.344664118 podStartE2EDuration="2m2.344664118s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:27.284726906 +0000 UTC m=+141.772958165" watchObservedRunningTime="2025-11-25 06:49:27.344664118 +0000 UTC m=+141.832895377" Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.408476 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4n6d5" podStartSLOduration=122.408459287 podStartE2EDuration="2m2.408459287s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:27.356584672 +0000 UTC m=+141.844815931" watchObservedRunningTime="2025-11-25 06:49:27.408459287 +0000 UTC m=+141.896690546" Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.409293 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hbqb4" podStartSLOduration=122.409286176 podStartE2EDuration="2m2.409286176s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:27.404129049 +0000 UTC m=+141.892360309" watchObservedRunningTime="2025-11-25 06:49:27.409286176 +0000 UTC m=+141.897517436" Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.441808 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:27 crc kubenswrapper[4482]: E1125 06:49:27.441969 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:27.941949625 +0000 UTC m=+142.430180884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.442106 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:27 crc kubenswrapper[4482]: E1125 06:49:27.442386 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:27.942375869 +0000 UTC m=+142.430607128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.460977 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" podStartSLOduration=122.460965393 podStartE2EDuration="2m2.460965393s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:27.437600021 +0000 UTC m=+141.925831280" watchObservedRunningTime="2025-11-25 06:49:27.460965393 +0000 UTC m=+141.949196651" Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.543261 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:27 crc kubenswrapper[4482]: E1125 06:49:27.543631 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.043619653 +0000 UTC m=+142.531850911 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.644778 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:27 crc kubenswrapper[4482]: E1125 06:49:27.645122 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.145111986 +0000 UTC m=+142.633343245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.745465 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:27 crc kubenswrapper[4482]: E1125 06:49:27.745665 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.245630261 +0000 UTC m=+142.733861520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.745856 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:27 crc kubenswrapper[4482]: E1125 06:49:27.746211 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.246195828 +0000 UTC m=+142.734427086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.750422 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4kxk8" Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.846495 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:27 crc kubenswrapper[4482]: E1125 06:49:27.846823 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.346810215 +0000 UTC m=+142.835041474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.847662 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" event={"ID":"6735e099-a06c-4b53-8c17-c3f644d7ba91","Type":"ContainerStarted","Data":"45de7a8706a54d807b86662502f3de60f6fdea9be7503fa8150a0b347847d28e"} Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.847691 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" event={"ID":"6735e099-a06c-4b53-8c17-c3f644d7ba91","Type":"ContainerStarted","Data":"aa5dc696c479695ed0a65c3cfbfbee32a615ebd17625ba2e1219aee875ba7fb7"} Nov 25 06:49:27 crc kubenswrapper[4482]: I1125 06:49:27.948315 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:27 crc kubenswrapper[4482]: E1125 06:49:27.955866 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.455855242 +0000 UTC m=+142.944086501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.049131 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.049341 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.549307008 +0000 UTC m=+143.037538268 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.049461 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.049751 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.549739775 +0000 UTC m=+143.037971024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.086281 4482 patch_prober.go:28] interesting pod/router-default-5444994796-6czb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 06:49:28 crc kubenswrapper[4482]: [-]has-synced failed: reason withheld Nov 25 06:49:28 crc kubenswrapper[4482]: [+]process-running ok Nov 25 06:49:28 crc kubenswrapper[4482]: healthz check failed Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.086329 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6czb8" podUID="d82a8d2c-46a2-4c77-b524-57c894fbc0a0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.150570 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.150673 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.650658095 +0000 UTC m=+143.138889354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.150857 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.151196 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.651188526 +0000 UTC m=+143.139419785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.252589 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.252723 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.752705083 +0000 UTC m=+143.240936343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.253460 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.253761 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.753751087 +0000 UTC m=+143.241982345 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.355023 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.355217 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.855197723 +0000 UTC m=+143.343428983 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.355580 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.356206 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.856188682 +0000 UTC m=+143.344419942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.456856 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.457050 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.95701593 +0000 UTC m=+143.445247189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.457472 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.457870 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:28.957856787 +0000 UTC m=+143.446088035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.559117 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.559316 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:29.059284096 +0000 UTC m=+143.547515355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.559407 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.559981 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:29.059958218 +0000 UTC m=+143.548189477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.607755 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5rzc2"] Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.608757 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5rzc2" Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.614249 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.624743 4482 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.637431 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5rzc2"] Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.660716 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.661208 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:29.161192113 +0000 UTC m=+143.649423371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.762627 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-utilities\") pod \"certified-operators-5rzc2\" (UID: \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\") " pod="openshift-marketplace/certified-operators-5rzc2"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.762911 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v89m7\" (UniqueName: \"kubernetes.io/projected/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-kube-api-access-v89m7\") pod \"certified-operators-5rzc2\" (UID: \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\") " pod="openshift-marketplace/certified-operators-5rzc2"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.762963 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.762992 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-catalog-content\") pod \"certified-operators-5rzc2\" (UID: \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\") " pod="openshift-marketplace/certified-operators-5rzc2"
Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.763305 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:29.263294466 +0000 UTC m=+143.751525725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.780717 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rr27s"]
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.781555 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rr27s"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.784121 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.807569 4482 patch_prober.go:28] interesting pod/apiserver-76f77b778f-6p2lq container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Nov 25 06:49:28 crc kubenswrapper[4482]: [+]log ok
Nov 25 06:49:28 crc kubenswrapper[4482]: [+]etcd ok
Nov 25 06:49:28 crc kubenswrapper[4482]: [+]poststarthook/start-apiserver-admission-initializer ok
Nov 25 06:49:28 crc kubenswrapper[4482]: [+]poststarthook/generic-apiserver-start-informers ok
Nov 25 06:49:28 crc kubenswrapper[4482]: [+]poststarthook/max-in-flight-filter ok
Nov 25 06:49:28 crc kubenswrapper[4482]: [+]poststarthook/storage-object-count-tracker-hook ok
Nov 25 06:49:28 crc kubenswrapper[4482]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Nov 25 06:49:28 crc kubenswrapper[4482]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Nov 25 06:49:28 crc kubenswrapper[4482]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Nov 25 06:49:28 crc kubenswrapper[4482]: [+]poststarthook/project.openshift.io-projectcache ok
Nov 25 06:49:28 crc kubenswrapper[4482]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Nov 25 06:49:28 crc kubenswrapper[4482]: [-]poststarthook/openshift.io-startinformers failed: reason withheld
Nov 25 06:49:28 crc kubenswrapper[4482]: [+]poststarthook/openshift.io-restmapperupdater ok
Nov 25 06:49:28 crc kubenswrapper[4482]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Nov 25 06:49:28 crc kubenswrapper[4482]: livez check failed
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.807620 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" podUID="43f33231-2b25-4a54-87da-e93c8cf3ee18" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.853610 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" event={"ID":"6735e099-a06c-4b53-8c17-c3f644d7ba91","Type":"ContainerStarted","Data":"e6b7f11f7c66b80d76a0bc52ad34fc5706756f1b7c427567c301ed7f3d8e6814"}
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.858039 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rr27s"]
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.863368 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.863614 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvgjl\" (UniqueName: \"kubernetes.io/projected/74a51867-1870-4ee4-bd5d-66ac6f1e3201-kube-api-access-mvgjl\") pod \"community-operators-rr27s\" (UID: \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\") " pod="openshift-marketplace/community-operators-rr27s"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.863646 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a51867-1870-4ee4-bd5d-66ac6f1e3201-catalog-content\") pod \"community-operators-rr27s\" (UID: \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\") " pod="openshift-marketplace/community-operators-rr27s"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.863677 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a51867-1870-4ee4-bd5d-66ac6f1e3201-utilities\") pod \"community-operators-rr27s\" (UID: \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\") " pod="openshift-marketplace/community-operators-rr27s"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.863705 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-utilities\") pod \"certified-operators-5rzc2\" (UID: \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\") " pod="openshift-marketplace/certified-operators-5rzc2"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.863741 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v89m7\" (UniqueName: \"kubernetes.io/projected/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-kube-api-access-v89m7\") pod \"certified-operators-5rzc2\" (UID: \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\") " pod="openshift-marketplace/certified-operators-5rzc2"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.863783 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-catalog-content\") pod \"certified-operators-5rzc2\" (UID: \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\") " pod="openshift-marketplace/certified-operators-5rzc2"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.864151 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-catalog-content\") pod \"certified-operators-5rzc2\" (UID: \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\") " pod="openshift-marketplace/certified-operators-5rzc2"
Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.864379 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 06:49:29.364340366 +0000 UTC m=+143.852571625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.864392 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-utilities\") pod \"certified-operators-5rzc2\" (UID: \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\") " pod="openshift-marketplace/certified-operators-5rzc2"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.888407 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v89m7\" (UniqueName: \"kubernetes.io/projected/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-kube-api-access-v89m7\") pod \"certified-operators-5rzc2\" (UID: \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\") " pod="openshift-marketplace/certified-operators-5rzc2"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.926862 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gvbtp" podStartSLOduration=10.926836275 podStartE2EDuration="10.926836275s" podCreationTimestamp="2025-11-25 06:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:28.924070077 +0000 UTC m=+143.412301336" watchObservedRunningTime="2025-11-25 06:49:28.926836275 +0000 UTC m=+143.415067534"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.936453 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5rzc2"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.967060 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a51867-1870-4ee4-bd5d-66ac6f1e3201-catalog-content\") pod \"community-operators-rr27s\" (UID: \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\") " pod="openshift-marketplace/community-operators-rr27s"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.967339 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a51867-1870-4ee4-bd5d-66ac6f1e3201-utilities\") pod \"community-operators-rr27s\" (UID: \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\") " pod="openshift-marketplace/community-operators-rr27s"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.967595 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.967963 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a51867-1870-4ee4-bd5d-66ac6f1e3201-catalog-content\") pod \"community-operators-rr27s\" (UID: \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\") " pod="openshift-marketplace/community-operators-rr27s"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.968021 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a51867-1870-4ee4-bd5d-66ac6f1e3201-utilities\") pod \"community-operators-rr27s\" (UID: \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\") " pod="openshift-marketplace/community-operators-rr27s"
Nov 25 06:49:28 crc kubenswrapper[4482]: E1125 06:49:28.969024 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 06:49:29.469009762 +0000 UTC m=+143.957241021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fbpdk" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.978062 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvgjl\" (UniqueName: \"kubernetes.io/projected/74a51867-1870-4ee4-bd5d-66ac6f1e3201-kube-api-access-mvgjl\") pod \"community-operators-rr27s\" (UID: \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\") " pod="openshift-marketplace/community-operators-rr27s"
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.974880 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-knqzt"]
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.973615 4482 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-25T06:49:28.624765807Z","Handler":null,"Name":""}
Nov 25 06:49:28 crc kubenswrapper[4482]: I1125 06:49:28.979409 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knqzt"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:28.999963 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-knqzt"]
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.008051 4482 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.008096 4482 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.012749 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvgjl\" (UniqueName: \"kubernetes.io/projected/74a51867-1870-4ee4-bd5d-66ac6f1e3201-kube-api-access-mvgjl\") pod \"community-operators-rr27s\" (UID: \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\") " pod="openshift-marketplace/community-operators-rr27s"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.078900 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.079119 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0f269d4-265d-4c80-be6c-cff0634e8f87-utilities\") pod \"certified-operators-knqzt\" (UID: \"e0f269d4-265d-4c80-be6c-cff0634e8f87\") " pod="openshift-marketplace/certified-operators-knqzt"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.079154 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0f269d4-265d-4c80-be6c-cff0634e8f87-catalog-content\") pod \"certified-operators-knqzt\" (UID: \"e0f269d4-265d-4c80-be6c-cff0634e8f87\") " pod="openshift-marketplace/certified-operators-knqzt"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.079189 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hsm5\" (UniqueName: \"kubernetes.io/projected/e0f269d4-265d-4c80-be6c-cff0634e8f87-kube-api-access-2hsm5\") pod \"certified-operators-knqzt\" (UID: \"e0f269d4-265d-4c80-be6c-cff0634e8f87\") " pod="openshift-marketplace/certified-operators-knqzt"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.085651 4482 patch_prober.go:28] interesting pod/router-default-5444994796-6czb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 06:49:29 crc kubenswrapper[4482]: [-]has-synced failed: reason withheld
Nov 25 06:49:29 crc kubenswrapper[4482]: [+]process-running ok
Nov 25 06:49:29 crc kubenswrapper[4482]: healthz check failed
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.085688 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6czb8" podUID="d82a8d2c-46a2-4c77-b524-57c894fbc0a0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.093708 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rr27s"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.169490 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.180564 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.180693 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0f269d4-265d-4c80-be6c-cff0634e8f87-utilities\") pod \"certified-operators-knqzt\" (UID: \"e0f269d4-265d-4c80-be6c-cff0634e8f87\") " pod="openshift-marketplace/certified-operators-knqzt"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.180721 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0f269d4-265d-4c80-be6c-cff0634e8f87-catalog-content\") pod \"certified-operators-knqzt\" (UID: \"e0f269d4-265d-4c80-be6c-cff0634e8f87\") " pod="openshift-marketplace/certified-operators-knqzt"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.180737 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hsm5\" (UniqueName: \"kubernetes.io/projected/e0f269d4-265d-4c80-be6c-cff0634e8f87-kube-api-access-2hsm5\") pod \"certified-operators-knqzt\" (UID: \"e0f269d4-265d-4c80-be6c-cff0634e8f87\") " pod="openshift-marketplace/certified-operators-knqzt"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.181471 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0f269d4-265d-4c80-be6c-cff0634e8f87-utilities\") pod \"certified-operators-knqzt\" (UID: \"e0f269d4-265d-4c80-be6c-cff0634e8f87\") " pod="openshift-marketplace/certified-operators-knqzt"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.181674 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0f269d4-265d-4c80-be6c-cff0634e8f87-catalog-content\") pod \"certified-operators-knqzt\" (UID: \"e0f269d4-265d-4c80-be6c-cff0634e8f87\") " pod="openshift-marketplace/certified-operators-knqzt"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.195792 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fwlcs"]
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.196627 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwlcs"
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.208863 4482 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.208903 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.209957 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hsm5\" (UniqueName: \"kubernetes.io/projected/e0f269d4-265d-4c80-be6c-cff0634e8f87-kube-api-access-2hsm5\") pod \"certified-operators-knqzt\" (UID: \"e0f269d4-265d-4c80-be6c-cff0634e8f87\") " pod="openshift-marketplace/certified-operators-knqzt" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.223682 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwlcs"] Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.282001 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-catalog-content\") pod \"community-operators-fwlcs\" (UID: \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\") " pod="openshift-marketplace/community-operators-fwlcs" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.282068 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-utilities\") pod \"community-operators-fwlcs\" (UID: \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\") " pod="openshift-marketplace/community-operators-fwlcs" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.282141 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj5zs\" (UniqueName: \"kubernetes.io/projected/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-kube-api-access-qj5zs\") pod \"community-operators-fwlcs\" (UID: \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\") " pod="openshift-marketplace/community-operators-fwlcs" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.309122 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-knqzt" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.384919 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-catalog-content\") pod \"community-operators-fwlcs\" (UID: \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\") " pod="openshift-marketplace/community-operators-fwlcs" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.385010 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-utilities\") pod \"community-operators-fwlcs\" (UID: \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\") " pod="openshift-marketplace/community-operators-fwlcs" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.385086 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj5zs\" (UniqueName: \"kubernetes.io/projected/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-kube-api-access-qj5zs\") pod \"community-operators-fwlcs\" (UID: \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\") " pod="openshift-marketplace/community-operators-fwlcs" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.387489 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-catalog-content\") pod \"community-operators-fwlcs\" (UID: \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\") " pod="openshift-marketplace/community-operators-fwlcs" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.387705 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-utilities\") pod \"community-operators-fwlcs\" (UID: \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\") " pod="openshift-marketplace/community-operators-fwlcs" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.419648 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fbpdk\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.424029 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj5zs\" (UniqueName: \"kubernetes.io/projected/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-kube-api-access-qj5zs\") pod \"community-operators-fwlcs\" (UID: \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\") " pod="openshift-marketplace/community-operators-fwlcs" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.447884 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5rzc2"] Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.463557 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.544304 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fwlcs" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.603550 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rr27s"] Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.852224 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.861598 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fbpdk"] Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.873105 4482 generic.go:334] "Generic (PLEG): container finished" podID="36a33d74-c23f-405e-a3c5-6f5a4de71e7a" containerID="6965b666d02688c9dc593712d60580ef3e94fd94aa2006dd99cec5617ccb85fa" exitCode=0 Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.873179 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rzc2" event={"ID":"36a33d74-c23f-405e-a3c5-6f5a4de71e7a","Type":"ContainerDied","Data":"6965b666d02688c9dc593712d60580ef3e94fd94aa2006dd99cec5617ccb85fa"} Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.873210 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rzc2" event={"ID":"36a33d74-c23f-405e-a3c5-6f5a4de71e7a","Type":"ContainerStarted","Data":"7707eafeca8af3fd8d7a7c0761c6c7e05071a66b2059482280654d0a198393ac"} Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.874900 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 06:49:29 crc kubenswrapper[4482]: I1125 06:49:29.894748 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr27s" event={"ID":"74a51867-1870-4ee4-bd5d-66ac6f1e3201","Type":"ContainerStarted","Data":"1073782d991dd358e8a0769544815e1bd787b8f557b02b93986279898df10f17"} Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.015142 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwlcs"] Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.090728 4482 patch_prober.go:28] interesting pod/router-default-5444994796-6czb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 06:49:30 crc kubenswrapper[4482]: [-]has-synced failed: reason withheld Nov 25 06:49:30 crc kubenswrapper[4482]: [+]process-running ok Nov 25 06:49:30 crc kubenswrapper[4482]: healthz check failed Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.090955 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6czb8" podUID="d82a8d2c-46a2-4c77-b524-57c894fbc0a0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.241697 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-knqzt"] Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.572542 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qk2s9"] Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.573899 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.578023 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.634374 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qk2s9"] Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.710262 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-utilities\") pod \"redhat-marketplace-qk2s9\" (UID: \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\") " pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.710352 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-catalog-content\") pod \"redhat-marketplace-qk2s9\" (UID: \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\") " pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.710582 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dm5z\" (UniqueName: \"kubernetes.io/projected/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-kube-api-access-7dm5z\") pod \"redhat-marketplace-qk2s9\" (UID: \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\") " pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.812522 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-catalog-content\") pod \"redhat-marketplace-qk2s9\" (UID: \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\") " pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.812586 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dm5z\" (UniqueName: \"kubernetes.io/projected/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-kube-api-access-7dm5z\") pod \"redhat-marketplace-qk2s9\" (UID: \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\") " pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.812656 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-utilities\") pod \"redhat-marketplace-qk2s9\" (UID: \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\") " pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.813545 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-catalog-content\") pod \"redhat-marketplace-qk2s9\" (UID: \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\") " pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.813744 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-utilities\") pod \"redhat-marketplace-qk2s9\" (UID: 
\"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\") " pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.832362 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dm5z\" (UniqueName: \"kubernetes.io/projected/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-kube-api-access-7dm5z\") pod \"redhat-marketplace-qk2s9\" (UID: \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\") " pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.889000 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.907531 4482 patch_prober.go:28] interesting pod/downloads-7954f5f757-78b9v container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.907579 4482 patch_prober.go:28] interesting pod/downloads-7954f5f757-78b9v container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.907631 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-78b9v" podUID="13c2044e-5435-4487-be5b-fafa43b6db3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.907578 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-78b9v" podUID="13c2044e-5435-4487-be5b-fafa43b6db3a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.919880 4482 generic.go:334] "Generic (PLEG): container finished" podID="74a51867-1870-4ee4-bd5d-66ac6f1e3201" containerID="8cf49fb90bfc8d3b6d0abd1d00de80b0b81bf5706490e3e659b76eae565c3245" exitCode=0 Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.920256 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr27s" event={"ID":"74a51867-1870-4ee4-bd5d-66ac6f1e3201","Type":"ContainerDied","Data":"8cf49fb90bfc8d3b6d0abd1d00de80b0b81bf5706490e3e659b76eae565c3245"} Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.929277 4482 generic.go:334] "Generic (PLEG): container finished" podID="e0f269d4-265d-4c80-be6c-cff0634e8f87" containerID="a3e398f1b50dad34e4ab51f92f513c9b0564b31bbf34717d69a8061b14641f3b" exitCode=0 Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.929410 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knqzt" event={"ID":"e0f269d4-265d-4c80-be6c-cff0634e8f87","Type":"ContainerDied","Data":"a3e398f1b50dad34e4ab51f92f513c9b0564b31bbf34717d69a8061b14641f3b"} Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.929463 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knqzt" 
event={"ID":"e0f269d4-265d-4c80-be6c-cff0634e8f87","Type":"ContainerStarted","Data":"bc9f0168dcaf7f325308f54106929b2a96ec3edf986b0537dbd0558ea449299f"} Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.947476 4482 generic.go:334] "Generic (PLEG): container finished" podID="51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" containerID="4798d634ba8cb8012918b5defade64256b2c0ed7b8a0039f08b70cbee2d1f54d" exitCode=0 Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.947552 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwlcs" event={"ID":"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7","Type":"ContainerDied","Data":"4798d634ba8cb8012918b5defade64256b2c0ed7b8a0039f08b70cbee2d1f54d"} Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.947579 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwlcs" event={"ID":"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7","Type":"ContainerStarted","Data":"8fa8fa6bf2012e20939cb13907578e0b9b9f384951e7387607e32a35dc8dd529"} Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.955731 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" event={"ID":"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146","Type":"ContainerStarted","Data":"ae7f7d644aef9a5be5764017667f84da40ef432f2107323933977bdeb1b43d91"} Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.955791 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" event={"ID":"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146","Type":"ContainerStarted","Data":"878db53dba431b2009cb369155f64a1b63806227297ee2f4e5d41e640fc64cc2"} Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.956347 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.967677 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkkv"] Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.968741 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:49:30 crc kubenswrapper[4482]: I1125 06:49:30.987052 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkkv"] Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.005857 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.009720 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" podStartSLOduration=126.009702488 podStartE2EDuration="2m6.009702488s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:31.00149586 +0000 UTC m=+145.489727118" watchObservedRunningTime="2025-11-25 06:49:31.009702488 +0000 UTC m=+145.497933748" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.018639 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-6p2lq" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.086640 4482 patch_prober.go:28] interesting pod/router-default-5444994796-6czb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 06:49:31 crc kubenswrapper[4482]: [-]has-synced failed: reason withheld Nov 25 06:49:31 crc kubenswrapper[4482]: [+]process-running ok Nov 25 06:49:31 crc kubenswrapper[4482]: healthz check failed Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.086720 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6czb8" podUID="d82a8d2c-46a2-4c77-b524-57c894fbc0a0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.124208 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad24fc25-dae5-4720-81b3-0960ee86d505-catalog-content\") pod \"redhat-marketplace-4vkkv\" (UID: \"ad24fc25-dae5-4720-81b3-0960ee86d505\") " pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.124290 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad24fc25-dae5-4720-81b3-0960ee86d505-utilities\") pod \"redhat-marketplace-4vkkv\" (UID: \"ad24fc25-dae5-4720-81b3-0960ee86d505\") " pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.124431 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxwks\" (UniqueName: \"kubernetes.io/projected/ad24fc25-dae5-4720-81b3-0960ee86d505-kube-api-access-vxwks\") pod \"redhat-marketplace-4vkkv\" (UID: \"ad24fc25-dae5-4720-81b3-0960ee86d505\") " pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.200479 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.200514 4482 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.208395 4482 patch_prober.go:28] interesting pod/console-f9d7485db-gqc49 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.208460 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gqc49" podUID="368e9f64-0e31-464e-9714-b4b3ea73cc36" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.225242 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxwks\" (UniqueName: \"kubernetes.io/projected/ad24fc25-dae5-4720-81b3-0960ee86d505-kube-api-access-vxwks\") pod \"redhat-marketplace-4vkkv\" (UID: \"ad24fc25-dae5-4720-81b3-0960ee86d505\") " pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.225368 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad24fc25-dae5-4720-81b3-0960ee86d505-catalog-content\") pod \"redhat-marketplace-4vkkv\" (UID: \"ad24fc25-dae5-4720-81b3-0960ee86d505\") " pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.225397 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad24fc25-dae5-4720-81b3-0960ee86d505-utilities\") pod \"redhat-marketplace-4vkkv\" (UID: \"ad24fc25-dae5-4720-81b3-0960ee86d505\") " pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.225863 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad24fc25-dae5-4720-81b3-0960ee86d505-utilities\") pod \"redhat-marketplace-4vkkv\" (UID: \"ad24fc25-dae5-4720-81b3-0960ee86d505\") " pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.226607 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad24fc25-dae5-4720-81b3-0960ee86d505-catalog-content\") pod \"redhat-marketplace-4vkkv\" (UID: \"ad24fc25-dae5-4720-81b3-0960ee86d505\") " pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.250929 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxwks\" (UniqueName: \"kubernetes.io/projected/ad24fc25-dae5-4720-81b3-0960ee86d505-kube-api-access-vxwks\") pod \"redhat-marketplace-4vkkv\" (UID: \"ad24fc25-dae5-4720-81b3-0960ee86d505\") " pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.283891 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qk2s9"] Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.292132 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:49:31 crc kubenswrapper[4482]: W1125 06:49:31.294818 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f447b1e_5bd0_49f1_9bbd_5277552dbba3.slice/crio-a53ffee34b42c415f2a825660b8c9f32fe750e1657c053d815e6d5774852733c WatchSource:0}: Error finding container a53ffee34b42c415f2a825660b8c9f32fe750e1657c053d815e6d5774852733c: Status 404 returned error can't find the container with id a53ffee34b42c415f2a825660b8c9f32fe750e1657c053d815e6d5774852733c Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.338947 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.339726 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.341474 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.347015 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.347197 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.429327 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37beea46-1843-4974-9dab-e2052f6d80b1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"37beea46-1843-4974-9dab-e2052f6d80b1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.429670 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37beea46-1843-4974-9dab-e2052f6d80b1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"37beea46-1843-4974-9dab-e2052f6d80b1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.441337 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.441383 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.450127 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.516628 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.531022 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37beea46-1843-4974-9dab-e2052f6d80b1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"37beea46-1843-4974-9dab-e2052f6d80b1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 
06:49:31.531301 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37beea46-1843-4974-9dab-e2052f6d80b1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"37beea46-1843-4974-9dab-e2052f6d80b1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.531643 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37beea46-1843-4974-9dab-e2052f6d80b1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"37beea46-1843-4974-9dab-e2052f6d80b1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.551301 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37beea46-1843-4974-9dab-e2052f6d80b1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"37beea46-1843-4974-9dab-e2052f6d80b1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.595211 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkkv"] Nov 25 06:49:31 crc kubenswrapper[4482]: W1125 06:49:31.617404 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad24fc25_dae5_4720_81b3_0960ee86d505.slice/crio-26f3ee6155f3ccfc21955939e113a9020bc4401697b242a1602dfe9f0e5518dd WatchSource:0}: Error finding container 26f3ee6155f3ccfc21955939e113a9020bc4401697b242a1602dfe9f0e5518dd: Status 404 returned error can't find the container with id 26f3ee6155f3ccfc21955939e113a9020bc4401697b242a1602dfe9f0e5518dd Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.657056 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.838157 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.839150 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.839193 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.839255 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.840412 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.844356 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.844744 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.845027 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 
06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.886671 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.944421 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.946680 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.951385 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.970881 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9nkrg"] Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.971797 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.973017 4482 generic.go:334] "Generic (PLEG): container finished" podID="0f447b1e-5bd0-49f1-9bbd-5277552dbba3" containerID="61b229dabdbe0fc493bf5eb104f7d233ded40cb3877425a2c982f5e8b2d00917" exitCode=0 Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.973069 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qk2s9" event={"ID":"0f447b1e-5bd0-49f1-9bbd-5277552dbba3","Type":"ContainerDied","Data":"61b229dabdbe0fc493bf5eb104f7d233ded40cb3877425a2c982f5e8b2d00917"} Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.973088 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qk2s9" event={"ID":"0f447b1e-5bd0-49f1-9bbd-5277552dbba3","Type":"ContainerStarted","Data":"a53ffee34b42c415f2a825660b8c9f32fe750e1657c053d815e6d5774852733c"} Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.975015 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.975061 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nkrg"] Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.979325 4482 generic.go:334] "Generic (PLEG): container finished" podID="9ff92469-ca47-4359-b56a-8df7332739ab" containerID="15fbe8f652383d0e7eda94bc0e38826dbb0cd557ed7d2c674bd037ed6e133196" exitCode=0 Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.979386 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr" event={"ID":"9ff92469-ca47-4359-b56a-8df7332739ab","Type":"ContainerDied","Data":"15fbe8f652383d0e7eda94bc0e38826dbb0cd557ed7d2c674bd037ed6e133196"} Nov 25 06:49:31 crc kubenswrapper[4482]: I1125 06:49:31.989683 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"37beea46-1843-4974-9dab-e2052f6d80b1","Type":"ContainerStarted","Data":"e22c2966797af063dd024e3279a26044dae8fca32de1654a0460e64cee22f2b5"} Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.013496 4482 generic.go:334] "Generic (PLEG): container finished" podID="ad24fc25-dae5-4720-81b3-0960ee86d505" 
containerID="5a82a987133ea3ce5962b48a4b6abd573e82db1b076655ac77fc017b1a624eb2" exitCode=0 Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.013601 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkkv" event={"ID":"ad24fc25-dae5-4720-81b3-0960ee86d505","Type":"ContainerDied","Data":"5a82a987133ea3ce5962b48a4b6abd573e82db1b076655ac77fc017b1a624eb2"} Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.013630 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkkv" event={"ID":"ad24fc25-dae5-4720-81b3-0960ee86d505","Type":"ContainerStarted","Data":"26f3ee6155f3ccfc21955939e113a9020bc4401697b242a1602dfe9f0e5518dd"} Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.023511 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5g2wl" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.047824 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-catalog-content\") pod \"redhat-operators-9nkrg\" (UID: \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\") " pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.050632 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-utilities\") pod \"redhat-operators-9nkrg\" (UID: \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\") " pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.051560 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qcv6\" (UniqueName: \"kubernetes.io/projected/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-kube-api-access-7qcv6\") pod \"redhat-operators-9nkrg\" (UID: \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\") " pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.088328 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-6czb8" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.098747 4482 patch_prober.go:28] interesting pod/router-default-5444994796-6czb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 06:49:32 crc kubenswrapper[4482]: [-]has-synced failed: reason withheld Nov 25 06:49:32 crc kubenswrapper[4482]: [+]process-running ok Nov 25 06:49:32 crc kubenswrapper[4482]: healthz check failed Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.098967 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6czb8" podUID="d82a8d2c-46a2-4c77-b524-57c894fbc0a0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.164192 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-catalog-content\") pod \"redhat-operators-9nkrg\" (UID: \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\") " 
pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.164349 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-utilities\") pod \"redhat-operators-9nkrg\" (UID: \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\") " pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.164479 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qcv6\" (UniqueName: \"kubernetes.io/projected/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-kube-api-access-7qcv6\") pod \"redhat-operators-9nkrg\" (UID: \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\") " pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.166154 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-catalog-content\") pod \"redhat-operators-9nkrg\" (UID: \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\") " pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.169348 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-utilities\") pod \"redhat-operators-9nkrg\" (UID: \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\") " pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.220131 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qcv6\" (UniqueName: \"kubernetes.io/projected/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-kube-api-access-7qcv6\") pod \"redhat-operators-9nkrg\" (UID: \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\") " pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.298388 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.382403 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s9sfj"] Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.383816 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.417874 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s9sfj"] Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.473494 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q52xk\" (UniqueName: \"kubernetes.io/projected/ec39f8a8-f28c-488a-8f02-e6c122084ddc-kube-api-access-q52xk\") pod \"redhat-operators-s9sfj\" (UID: \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\") " pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.473578 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec39f8a8-f28c-488a-8f02-e6c122084ddc-catalog-content\") pod \"redhat-operators-s9sfj\" (UID: \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\") " pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.473626 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec39f8a8-f28c-488a-8f02-e6c122084ddc-utilities\") pod \"redhat-operators-s9sfj\" (UID: \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\") " pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.574779 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q52xk\" (UniqueName: \"kubernetes.io/projected/ec39f8a8-f28c-488a-8f02-e6c122084ddc-kube-api-access-q52xk\") pod \"redhat-operators-s9sfj\" (UID: \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\") " pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.575001 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec39f8a8-f28c-488a-8f02-e6c122084ddc-catalog-content\") pod \"redhat-operators-s9sfj\" (UID: \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\") " pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.575030 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec39f8a8-f28c-488a-8f02-e6c122084ddc-utilities\") pod \"redhat-operators-s9sfj\" (UID: \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\") " pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.575467 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec39f8a8-f28c-488a-8f02-e6c122084ddc-utilities\") pod \"redhat-operators-s9sfj\" (UID: \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\") " pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.576024 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec39f8a8-f28c-488a-8f02-e6c122084ddc-catalog-content\") pod \"redhat-operators-s9sfj\" (UID: \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\") " pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.597425 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q52xk\" (UniqueName: \"kubernetes.io/projected/ec39f8a8-f28c-488a-8f02-e6c122084ddc-kube-api-access-q52xk\") pod \"redhat-operators-s9sfj\" (UID: \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\") " pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.701231 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.817656 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.818449 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.822582 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.823138 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.838660 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.880087 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9326ccfa-b7f4-4e47-879b-5379fbef0702-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9326ccfa-b7f4-4e47-879b-5379fbef0702\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.880365 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9326ccfa-b7f4-4e47-879b-5379fbef0702-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9326ccfa-b7f4-4e47-879b-5379fbef0702\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.984627 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9326ccfa-b7f4-4e47-879b-5379fbef0702-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9326ccfa-b7f4-4e47-879b-5379fbef0702\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.984730 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9326ccfa-b7f4-4e47-879b-5379fbef0702-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9326ccfa-b7f4-4e47-879b-5379fbef0702\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.984749 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9326ccfa-b7f4-4e47-879b-5379fbef0702-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9326ccfa-b7f4-4e47-879b-5379fbef0702\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 06:49:33 crc kubenswrapper[4482]: W1125 06:49:33.002791 4482 manager.go:1169] Failed to process watch event {EventType:0 
Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.838660 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.880087 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9326ccfa-b7f4-4e47-879b-5379fbef0702-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9326ccfa-b7f4-4e47-879b-5379fbef0702\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.880365 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9326ccfa-b7f4-4e47-879b-5379fbef0702-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9326ccfa-b7f4-4e47-879b-5379fbef0702\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.984627 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9326ccfa-b7f4-4e47-879b-5379fbef0702-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9326ccfa-b7f4-4e47-879b-5379fbef0702\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.984730 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9326ccfa-b7f4-4e47-879b-5379fbef0702-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9326ccfa-b7f4-4e47-879b-5379fbef0702\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 06:49:32 crc kubenswrapper[4482]: I1125 06:49:32.984749 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9326ccfa-b7f4-4e47-879b-5379fbef0702-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9326ccfa-b7f4-4e47-879b-5379fbef0702\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 06:49:33 crc kubenswrapper[4482]: W1125 06:49:33.002791 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-5290a8cf84790c0a381b8f7737f7aeeb8ba4e54703c44d6b4cb9def7319fe617 WatchSource:0}: Error finding container 5290a8cf84790c0a381b8f7737f7aeeb8ba4e54703c44d6b4cb9def7319fe617: Status 404 returned error can't find the container with id 5290a8cf84790c0a381b8f7737f7aeeb8ba4e54703c44d6b4cb9def7319fe617
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.006134 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9326ccfa-b7f4-4e47-879b-5379fbef0702-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9326ccfa-b7f4-4e47-879b-5379fbef0702\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.068738 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"37beea46-1843-4974-9dab-e2052f6d80b1","Type":"ContainerStarted","Data":"677756a213c2061dcf5f50864638e6e2e009483ef7a2ad1070661e5c95e067ff"}
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.077081 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5290a8cf84790c0a381b8f7737f7aeeb8ba4e54703c44d6b4cb9def7319fe617"}
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.078597 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"40d32da096fb665ca4d74f9f5809ca66dc72bc2fe354505dcd1251bfb32578ec"}
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.088018 4482 patch_prober.go:28] interesting pod/router-default-5444994796-6czb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 06:49:33 crc kubenswrapper[4482]: [-]has-synced failed: reason withheld
Nov 25 06:49:33 crc kubenswrapper[4482]: [+]process-running ok
Nov 25 06:49:33 crc kubenswrapper[4482]: healthz check failed
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.088057 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6czb8" podUID="d82a8d2c-46a2-4c77-b524-57c894fbc0a0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.135498 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.138806 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nkrg"]
Nov 25 06:49:33 crc kubenswrapper[4482]: W1125 06:49:33.201581 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7388949f_6c3e_4c11_96b6_b8a7c6ed5765.slice/crio-ba5b0c4ded9d9b8535ae475fede276c3bf7caaf8fdde04987b082445a61e3013 WatchSource:0}: Error finding container ba5b0c4ded9d9b8535ae475fede276c3bf7caaf8fdde04987b082445a61e3013: Status 404 returned error can't find the container with id ba5b0c4ded9d9b8535ae475fede276c3bf7caaf8fdde04987b082445a61e3013
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.250031 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s9sfj"]
Nov 25 06:49:33 crc kubenswrapper[4482]: W1125 06:49:33.280142 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec39f8a8_f28c_488a_8f02_e6c122084ddc.slice/crio-e9b832a34966f2aae88c480cc66469632e5941ae70f7bab49858453e580279dc WatchSource:0}: Error finding container e9b832a34966f2aae88c480cc66469632e5941ae70f7bab49858453e580279dc: Status 404 returned error can't find the container with id e9b832a34966f2aae88c480cc66469632e5941ae70f7bab49858453e580279dc
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.538928 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr"
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.593195 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ff92469-ca47-4359-b56a-8df7332739ab-config-volume\") pod \"9ff92469-ca47-4359-b56a-8df7332739ab\" (UID: \"9ff92469-ca47-4359-b56a-8df7332739ab\") "
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.593293 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ff92469-ca47-4359-b56a-8df7332739ab-secret-volume\") pod \"9ff92469-ca47-4359-b56a-8df7332739ab\" (UID: \"9ff92469-ca47-4359-b56a-8df7332739ab\") "
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.593375 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z72rs\" (UniqueName: \"kubernetes.io/projected/9ff92469-ca47-4359-b56a-8df7332739ab-kube-api-access-z72rs\") pod \"9ff92469-ca47-4359-b56a-8df7332739ab\" (UID: \"9ff92469-ca47-4359-b56a-8df7332739ab\") "
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.595081 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff92469-ca47-4359-b56a-8df7332739ab-config-volume" (OuterVolumeSpecName: "config-volume") pod "9ff92469-ca47-4359-b56a-8df7332739ab" (UID: "9ff92469-ca47-4359-b56a-8df7332739ab"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.600362 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff92469-ca47-4359-b56a-8df7332739ab-kube-api-access-z72rs" (OuterVolumeSpecName: "kube-api-access-z72rs") pod "9ff92469-ca47-4359-b56a-8df7332739ab" (UID: "9ff92469-ca47-4359-b56a-8df7332739ab"). InnerVolumeSpecName "kube-api-access-z72rs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.607406 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff92469-ca47-4359-b56a-8df7332739ab-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9ff92469-ca47-4359-b56a-8df7332739ab" (UID: "9ff92469-ca47-4359-b56a-8df7332739ab"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.695404 4482 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ff92469-ca47-4359-b56a-8df7332739ab-config-volume\") on node \"crc\" DevicePath \"\""
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.695449 4482 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ff92469-ca47-4359-b56a-8df7332739ab-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.695460 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z72rs\" (UniqueName: \"kubernetes.io/projected/9ff92469-ca47-4359-b56a-8df7332739ab-kube-api-access-z72rs\") on node \"crc\" DevicePath \"\""
Nov 25 06:49:33 crc kubenswrapper[4482]: I1125 06:49:33.814834 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.085529 4482 patch_prober.go:28] interesting pod/router-default-5444994796-6czb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 06:49:34 crc kubenswrapper[4482]: [-]has-synced failed: reason withheld
Nov 25 06:49:34 crc kubenswrapper[4482]: [+]process-running ok
Nov 25 06:49:34 crc kubenswrapper[4482]: healthz check failed
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.085900 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6czb8" podUID="d82a8d2c-46a2-4c77-b524-57c894fbc0a0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.106344 4482 generic.go:334] "Generic (PLEG): container finished" podID="37beea46-1843-4974-9dab-e2052f6d80b1" containerID="677756a213c2061dcf5f50864638e6e2e009483ef7a2ad1070661e5c95e067ff" exitCode=0
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.106409 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"37beea46-1843-4974-9dab-e2052f6d80b1","Type":"ContainerDied","Data":"677756a213c2061dcf5f50864638e6e2e009483ef7a2ad1070661e5c95e067ff"}
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.113735 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr" event={"ID":"9ff92469-ca47-4359-b56a-8df7332739ab","Type":"ContainerDied","Data":"950bddd38864b361c536524818498b89cd4663f803629fc794d1803f37e7c730"}
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.113763 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr"
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.113772 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="950bddd38864b361c536524818498b89cd4663f803629fc794d1803f37e7c730"
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.120988 4482 generic.go:334] "Generic (PLEG): container finished" podID="ec39f8a8-f28c-488a-8f02-e6c122084ddc" containerID="16340898b99bf5f0c3077592ec35159ef687970d5a48058739310ab2a5b012a9" exitCode=0
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.121350 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9sfj" event={"ID":"ec39f8a8-f28c-488a-8f02-e6c122084ddc","Type":"ContainerDied","Data":"16340898b99bf5f0c3077592ec35159ef687970d5a48058739310ab2a5b012a9"}
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.121405 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9sfj" event={"ID":"ec39f8a8-f28c-488a-8f02-e6c122084ddc","Type":"ContainerStarted","Data":"e9b832a34966f2aae88c480cc66469632e5941ae70f7bab49858453e580279dc"}
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.129597 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9326ccfa-b7f4-4e47-879b-5379fbef0702","Type":"ContainerStarted","Data":"8bb98f44c4a0a178b132ff8e1e682b1f2cc30357b0427f4ce4edf9115a1b3720"}
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.136060 4482 generic.go:334] "Generic (PLEG): container finished" podID="7388949f-6c3e-4c11-96b6-b8a7c6ed5765" containerID="c986f019340a91630609d5525b902b73f2b606ad7bab3a8c9ed2d482d3bb5288" exitCode=0
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.136249 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nkrg" event={"ID":"7388949f-6c3e-4c11-96b6-b8a7c6ed5765","Type":"ContainerDied","Data":"c986f019340a91630609d5525b902b73f2b606ad7bab3a8c9ed2d482d3bb5288"}
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.136360 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nkrg" event={"ID":"7388949f-6c3e-4c11-96b6-b8a7c6ed5765","Type":"ContainerStarted","Data":"ba5b0c4ded9d9b8535ae475fede276c3bf7caaf8fdde04987b082445a61e3013"}
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.143743 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ea83aabd54e91338fa54090d19f529f26ce89ae143f12d6015b7fdf7cbcf449e"}
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.143767 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a8020045284e82f4717d2a9efaf27dd1111aaa3055ed0bf1abce6d96d37d4228"}
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.156445 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3afae770e648b1db3ccc641bcb5dc519ab730a44559623ae938f463e1c2d15c6"}
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.156696 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.200841 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fc14beac43e69ea7f4ab3cf8e456ccf158b318b3e4101cbdfe1955145b5d4685"}
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.432885 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.507142 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37beea46-1843-4974-9dab-e2052f6d80b1-kubelet-dir\") pod \"37beea46-1843-4974-9dab-e2052f6d80b1\" (UID: \"37beea46-1843-4974-9dab-e2052f6d80b1\") "
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.507218 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37beea46-1843-4974-9dab-e2052f6d80b1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "37beea46-1843-4974-9dab-e2052f6d80b1" (UID: "37beea46-1843-4974-9dab-e2052f6d80b1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.507459 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37beea46-1843-4974-9dab-e2052f6d80b1-kube-api-access\") pod \"37beea46-1843-4974-9dab-e2052f6d80b1\" (UID: \"37beea46-1843-4974-9dab-e2052f6d80b1\") "
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.507914 4482 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37beea46-1843-4974-9dab-e2052f6d80b1-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.516916 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37beea46-1843-4974-9dab-e2052f6d80b1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "37beea46-1843-4974-9dab-e2052f6d80b1" (UID: "37beea46-1843-4974-9dab-e2052f6d80b1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:49:34 crc kubenswrapper[4482]: I1125 06:49:34.609267 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37beea46-1843-4974-9dab-e2052f6d80b1-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 25 06:49:35 crc kubenswrapper[4482]: I1125 06:49:35.086163 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-6czb8"
Nov 25 06:49:35 crc kubenswrapper[4482]: I1125 06:49:35.088893 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-6czb8"
Nov 25 06:49:35 crc kubenswrapper[4482]: I1125 06:49:35.212838 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 06:49:35 crc kubenswrapper[4482]: I1125 06:49:35.212988 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"37beea46-1843-4974-9dab-e2052f6d80b1","Type":"ContainerDied","Data":"e22c2966797af063dd024e3279a26044dae8fca32de1654a0460e64cee22f2b5"}
Nov 25 06:49:35 crc kubenswrapper[4482]: I1125 06:49:35.213031 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22c2966797af063dd024e3279a26044dae8fca32de1654a0460e64cee22f2b5"
Nov 25 06:49:35 crc kubenswrapper[4482]: I1125 06:49:35.216022 4482 generic.go:334] "Generic (PLEG): container finished" podID="9326ccfa-b7f4-4e47-879b-5379fbef0702" containerID="4cf1bc2effdcf98fd2f4dcbee3b2bc842b12fddc4b0146913392dfa15a317c74" exitCode=0
Nov 25 06:49:35 crc kubenswrapper[4482]: I1125 06:49:35.216062 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9326ccfa-b7f4-4e47-879b-5379fbef0702","Type":"ContainerDied","Data":"4cf1bc2effdcf98fd2f4dcbee3b2bc842b12fddc4b0146913392dfa15a317c74"}
Nov 25 06:49:36 crc kubenswrapper[4482]: I1125 06:49:36.489709 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 25 06:49:36 crc kubenswrapper[4482]: I1125 06:49:36.540150 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gp4k7"
Nov 25 06:49:36 crc kubenswrapper[4482]: I1125 06:49:36.564300 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9326ccfa-b7f4-4e47-879b-5379fbef0702-kubelet-dir\") pod \"9326ccfa-b7f4-4e47-879b-5379fbef0702\" (UID: \"9326ccfa-b7f4-4e47-879b-5379fbef0702\") "
Nov 25 06:49:36 crc kubenswrapper[4482]: I1125 06:49:36.564392 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9326ccfa-b7f4-4e47-879b-5379fbef0702-kube-api-access\") pod \"9326ccfa-b7f4-4e47-879b-5379fbef0702\" (UID: \"9326ccfa-b7f4-4e47-879b-5379fbef0702\") "
Nov 25 06:49:36 crc kubenswrapper[4482]: I1125 06:49:36.564469 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9326ccfa-b7f4-4e47-879b-5379fbef0702-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9326ccfa-b7f4-4e47-879b-5379fbef0702" (UID: "9326ccfa-b7f4-4e47-879b-5379fbef0702"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:49:36 crc kubenswrapper[4482]: I1125 06:49:36.564984 4482 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9326ccfa-b7f4-4e47-879b-5379fbef0702-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 06:49:36 crc kubenswrapper[4482]: I1125 06:49:36.574927 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9326ccfa-b7f4-4e47-879b-5379fbef0702-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9326ccfa-b7f4-4e47-879b-5379fbef0702" (UID: "9326ccfa-b7f4-4e47-879b-5379fbef0702"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:49:36 crc kubenswrapper[4482]: I1125 06:49:36.668519 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9326ccfa-b7f4-4e47-879b-5379fbef0702-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 06:49:37 crc kubenswrapper[4482]: I1125 06:49:37.276074 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9326ccfa-b7f4-4e47-879b-5379fbef0702","Type":"ContainerDied","Data":"8bb98f44c4a0a178b132ff8e1e682b1f2cc30357b0427f4ce4edf9115a1b3720"} Nov 25 06:49:37 crc kubenswrapper[4482]: I1125 06:49:37.276379 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb98f44c4a0a178b132ff8e1e682b1f2cc30357b0427f4ce4edf9115a1b3720" Nov 25 06:49:37 crc kubenswrapper[4482]: I1125 06:49:37.276297 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 06:49:39 crc kubenswrapper[4482]: I1125 06:49:39.117692 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 06:49:39 crc kubenswrapper[4482]: I1125 06:49:39.118019 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 06:49:40 crc kubenswrapper[4482]: I1125 06:49:40.908583 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-78b9v" Nov 25 06:49:41 crc kubenswrapper[4482]: I1125 06:49:41.206568 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:41 crc kubenswrapper[4482]: I1125 06:49:41.211946 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:49:46 crc kubenswrapper[4482]: I1125 06:49:46.839581 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:49:46 crc kubenswrapper[4482]: I1125 06:49:46.845770 4482 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a1c9846-2a7e-402e-985f-51a244241bd7-metrics-certs\") pod \"network-metrics-daemon-2xhh4\" (UID: \"0a1c9846-2a7e-402e-985f-51a244241bd7\") " pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:49:46 crc kubenswrapper[4482]: I1125 06:49:46.956011 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xhh4" Nov 25 06:49:49 crc kubenswrapper[4482]: I1125 06:49:49.468443 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.165196 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2xhh4"] Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.436899 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" event={"ID":"0a1c9846-2a7e-402e-985f-51a244241bd7","Type":"ContainerStarted","Data":"d399e4b33e4b4e718cdc15567e358fb240c1b28361dea1ff1486fac24254f944"} Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.438966 4482 generic.go:334] "Generic (PLEG): container finished" podID="51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" containerID="2537509c5cfcded5573f541c2a22ad766b6662b214ef38752de74b6d72147abb" exitCode=0 Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.439049 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwlcs" event={"ID":"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7","Type":"ContainerDied","Data":"2537509c5cfcded5573f541c2a22ad766b6662b214ef38752de74b6d72147abb"} Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.441916 4482 generic.go:334] "Generic (PLEG): container finished" podID="ad24fc25-dae5-4720-81b3-0960ee86d505" containerID="41108f0aaa2899ece0e375e5a95caa435a1921f4816213de10ef9725368767ad" exitCode=0 Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.441978 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkkv" event={"ID":"ad24fc25-dae5-4720-81b3-0960ee86d505","Type":"ContainerDied","Data":"41108f0aaa2899ece0e375e5a95caa435a1921f4816213de10ef9725368767ad"} Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.446812 4482 generic.go:334] "Generic (PLEG): container finished" podID="74a51867-1870-4ee4-bd5d-66ac6f1e3201" containerID="a33c9b014f9b238f9f0389ec1d64deaafcb9e1d930b286099ab93c0da1782ffb" exitCode=0 Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.446874 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr27s" event={"ID":"74a51867-1870-4ee4-bd5d-66ac6f1e3201","Type":"ContainerDied","Data":"a33c9b014f9b238f9f0389ec1d64deaafcb9e1d930b286099ab93c0da1782ffb"} Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.458486 4482 generic.go:334] "Generic (PLEG): container finished" podID="0f447b1e-5bd0-49f1-9bbd-5277552dbba3" containerID="39bb2864dfe41dad3b0916da7f55e8cd0f36e8ba1e010ab2ccc90904a4977c40" exitCode=0 Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.458566 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qk2s9" event={"ID":"0f447b1e-5bd0-49f1-9bbd-5277552dbba3","Type":"ContainerDied","Data":"39bb2864dfe41dad3b0916da7f55e8cd0f36e8ba1e010ab2ccc90904a4977c40"} Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 
06:49:57.467426 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knqzt" event={"ID":"e0f269d4-265d-4c80-be6c-cff0634e8f87","Type":"ContainerStarted","Data":"2fcc06220c422c78be2599f4f27a291ee24c742c32ccd0f7d9859b58e7d013d1"} Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.470523 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9sfj" event={"ID":"ec39f8a8-f28c-488a-8f02-e6c122084ddc","Type":"ContainerStarted","Data":"0302db08ea72636fcb9956d59d492f75f46d599ecac2029505fa902fb9a444dd"} Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.473736 4482 generic.go:334] "Generic (PLEG): container finished" podID="36a33d74-c23f-405e-a3c5-6f5a4de71e7a" containerID="8abe6058c24e8c79cc2478285c5bfabafac955c6fc34623efcca33e4ee4284ef" exitCode=0 Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.473807 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rzc2" event={"ID":"36a33d74-c23f-405e-a3c5-6f5a4de71e7a","Type":"ContainerDied","Data":"8abe6058c24e8c79cc2478285c5bfabafac955c6fc34623efcca33e4ee4284ef"} Nov 25 06:49:57 crc kubenswrapper[4482]: I1125 06:49:57.478598 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nkrg" event={"ID":"7388949f-6c3e-4c11-96b6-b8a7c6ed5765","Type":"ContainerStarted","Data":"bc1500d34d49702a0c235f8a0cb55b668446f72e7e7e4833d546564cec4e8893"} Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.487050 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qk2s9" event={"ID":"0f447b1e-5bd0-49f1-9bbd-5277552dbba3","Type":"ContainerStarted","Data":"b59e0ce0dd1a528d189b51867deb739f91328360c46886a298634023574593f8"} Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.491506 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwlcs" event={"ID":"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7","Type":"ContainerStarted","Data":"3a14d2a11d4cb47094352f153e20a27ec32193d630c1afa2189c21a010883a6f"} Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.494288 4482 generic.go:334] "Generic (PLEG): container finished" podID="7388949f-6c3e-4c11-96b6-b8a7c6ed5765" containerID="bc1500d34d49702a0c235f8a0cb55b668446f72e7e7e4833d546564cec4e8893" exitCode=0 Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.494370 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nkrg" event={"ID":"7388949f-6c3e-4c11-96b6-b8a7c6ed5765","Type":"ContainerDied","Data":"bc1500d34d49702a0c235f8a0cb55b668446f72e7e7e4833d546564cec4e8893"} Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.496276 4482 generic.go:334] "Generic (PLEG): container finished" podID="e0f269d4-265d-4c80-be6c-cff0634e8f87" containerID="2fcc06220c422c78be2599f4f27a291ee24c742c32ccd0f7d9859b58e7d013d1" exitCode=0 Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.496359 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knqzt" event={"ID":"e0f269d4-265d-4c80-be6c-cff0634e8f87","Type":"ContainerDied","Data":"2fcc06220c422c78be2599f4f27a291ee24c742c32ccd0f7d9859b58e7d013d1"} Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.498600 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" 
event={"ID":"0a1c9846-2a7e-402e-985f-51a244241bd7","Type":"ContainerStarted","Data":"a36b3bd07c77e941c8728c630d835309b3ca9db09af80a648cfeb49422463334"} Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.498627 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2xhh4" event={"ID":"0a1c9846-2a7e-402e-985f-51a244241bd7","Type":"ContainerStarted","Data":"7caba6ba294f6aa80e16b41818f5f3ac4dc6b8b294483e633ee9b13d885f5e57"} Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.503606 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkkv" event={"ID":"ad24fc25-dae5-4720-81b3-0960ee86d505","Type":"ContainerStarted","Data":"3318e1eaa0afd591767c84b3b95b014031f298127f4da097a9390825c8642273"} Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.506026 4482 generic.go:334] "Generic (PLEG): container finished" podID="ec39f8a8-f28c-488a-8f02-e6c122084ddc" containerID="0302db08ea72636fcb9956d59d492f75f46d599ecac2029505fa902fb9a444dd" exitCode=0 Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.506083 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9sfj" event={"ID":"ec39f8a8-f28c-488a-8f02-e6c122084ddc","Type":"ContainerDied","Data":"0302db08ea72636fcb9956d59d492f75f46d599ecac2029505fa902fb9a444dd"} Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.513074 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rzc2" event={"ID":"36a33d74-c23f-405e-a3c5-6f5a4de71e7a","Type":"ContainerStarted","Data":"12dc077bcceded9a97d9441582f6e861e2a601b5464da5f29e05342eb301b7c3"} Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.515632 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr27s" event={"ID":"74a51867-1870-4ee4-bd5d-66ac6f1e3201","Type":"ContainerStarted","Data":"5bf5f1c0ad81a27b69cd314c5cb38fcada44f3b29b34a336b119a7cfbe16fe37"} Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.519086 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qk2s9" podStartSLOduration=2.459121729 podStartE2EDuration="28.519073956s" podCreationTimestamp="2025-11-25 06:49:30 +0000 UTC" firstStartedPulling="2025-11-25 06:49:32.026158195 +0000 UTC m=+146.514389453" lastFinishedPulling="2025-11-25 06:49:58.08611042 +0000 UTC m=+172.574341680" observedRunningTime="2025-11-25 06:49:58.515338469 +0000 UTC m=+173.003569729" watchObservedRunningTime="2025-11-25 06:49:58.519073956 +0000 UTC m=+173.007305215" Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.531934 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rr27s" podStartSLOduration=3.456942012 podStartE2EDuration="30.531921569s" podCreationTimestamp="2025-11-25 06:49:28 +0000 UTC" firstStartedPulling="2025-11-25 06:49:30.922978042 +0000 UTC m=+145.411209300" lastFinishedPulling="2025-11-25 06:49:57.997957597 +0000 UTC m=+172.486188857" observedRunningTime="2025-11-25 06:49:58.528074824 +0000 UTC m=+173.016306082" watchObservedRunningTime="2025-11-25 06:49:58.531921569 +0000 UTC m=+173.020152828" Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.565474 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4vkkv" podStartSLOduration=2.592323392 podStartE2EDuration="28.56546094s" 
Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.567229 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fwlcs" podStartSLOduration=2.457522908 podStartE2EDuration="29.5672211s" podCreationTimestamp="2025-11-25 06:49:29 +0000 UTC" firstStartedPulling="2025-11-25 06:49:30.953807 +0000 UTC m=+145.442038260" lastFinishedPulling="2025-11-25 06:49:58.063505192 +0000 UTC m=+172.551736452" observedRunningTime="2025-11-25 06:49:58.550112711 +0000 UTC m=+173.038343969" watchObservedRunningTime="2025-11-25 06:49:58.5672211 +0000 UTC m=+173.055452349"
Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.599780 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2xhh4" podStartSLOduration=153.599770824 podStartE2EDuration="2m33.599770824s" podCreationTimestamp="2025-11-25 06:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:49:58.594401207 +0000 UTC m=+173.082632466" watchObservedRunningTime="2025-11-25 06:49:58.599770824 +0000 UTC m=+173.088002084"
Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.663876 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5rzc2" podStartSLOduration=2.490866011 podStartE2EDuration="30.66386143s" podCreationTimestamp="2025-11-25 06:49:28 +0000 UTC" firstStartedPulling="2025-11-25 06:49:29.874661652 +0000 UTC m=+144.362892910" lastFinishedPulling="2025-11-25 06:49:58.047657071 +0000 UTC m=+172.535888329" observedRunningTime="2025-11-25 06:49:58.662948078 +0000 UTC m=+173.151179337" watchObservedRunningTime="2025-11-25 06:49:58.66386143 +0000 UTC m=+173.152092679"
Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.936747 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5rzc2"
Nov 25 06:49:58 crc kubenswrapper[4482]: I1125 06:49:58.936793 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5rzc2"
Nov 25 06:49:59 crc kubenswrapper[4482]: I1125 06:49:59.093987 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rr27s"
Nov 25 06:49:59 crc kubenswrapper[4482]: I1125 06:49:59.098342 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rr27s"
Nov 25 06:49:59 crc kubenswrapper[4482]: I1125 06:49:59.530562 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9sfj" event={"ID":"ec39f8a8-f28c-488a-8f02-e6c122084ddc","Type":"ContainerStarted","Data":"aaf13a645412de8efad8c85611624521913ebd6a06498c4173047c58c616e97c"}
Nov 25 06:49:59 crc kubenswrapper[4482]: I1125 06:49:59.533906 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nkrg" event={"ID":"7388949f-6c3e-4c11-96b6-b8a7c6ed5765","Type":"ContainerStarted","Data":"702efcbdd6091501e840aa017b955ce2893fbf5ca09acf70f45dedf31980efb2"}
Nov 25 06:49:59 crc kubenswrapper[4482]: I1125 06:49:59.539522 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knqzt" event={"ID":"e0f269d4-265d-4c80-be6c-cff0634e8f87","Type":"ContainerStarted","Data":"50741cbe06ce34304efa1c9ba35a11a1546b89b94c3ca4378f1ddad1cfe309b3"}
Nov 25 06:49:59 crc kubenswrapper[4482]: I1125 06:49:59.545070 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fwlcs"
Nov 25 06:49:59 crc kubenswrapper[4482]: I1125 06:49:59.545602 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fwlcs"
Nov 25 06:49:59 crc kubenswrapper[4482]: I1125 06:49:59.559817 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s9sfj" podStartSLOduration=2.63779178 podStartE2EDuration="27.55979976s" podCreationTimestamp="2025-11-25 06:49:32 +0000 UTC" firstStartedPulling="2025-11-25 06:49:34.122706515 +0000 UTC m=+148.610937775" lastFinishedPulling="2025-11-25 06:49:59.044714497 +0000 UTC m=+173.532945755" observedRunningTime="2025-11-25 06:49:59.555814403 +0000 UTC m=+174.044045661" watchObservedRunningTime="2025-11-25 06:49:59.55979976 +0000 UTC m=+174.048031008"
Nov 25 06:49:59 crc kubenswrapper[4482]: I1125 06:49:59.596033 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9nkrg" podStartSLOduration=3.620110639 podStartE2EDuration="28.595998376s" podCreationTimestamp="2025-11-25 06:49:31 +0000 UTC" firstStartedPulling="2025-11-25 06:49:34.13996558 +0000 UTC m=+148.628196839" lastFinishedPulling="2025-11-25 06:49:59.115853318 +0000 UTC m=+173.604084576" observedRunningTime="2025-11-25 06:49:59.59167938 +0000 UTC m=+174.079910640" watchObservedRunningTime="2025-11-25 06:49:59.595998376 +0000 UTC m=+174.084229635"
Nov 25 06:49:59 crc kubenswrapper[4482]: I1125 06:49:59.625768 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-knqzt" podStartSLOduration=3.492807821 podStartE2EDuration="31.625737759s" podCreationTimestamp="2025-11-25 06:49:28 +0000 UTC" firstStartedPulling="2025-11-25 06:49:30.938591852 +0000 UTC m=+145.426823101" lastFinishedPulling="2025-11-25 06:49:59.071521781 +0000 UTC m=+173.559753039" observedRunningTime="2025-11-25 06:49:59.62460849 +0000 UTC m=+174.112839749" watchObservedRunningTime="2025-11-25 06:49:59.625737759 +0000 UTC m=+174.113969018"
Nov 25 06:50:00 crc kubenswrapper[4482]: I1125 06:50:00.031610 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5rzc2" podUID="36a33d74-c23f-405e-a3c5-6f5a4de71e7a" containerName="registry-server" probeResult="failure" output=<
Nov 25 06:50:00 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s
Nov 25 06:50:00 crc kubenswrapper[4482]: >
Nov 25 06:50:00 crc kubenswrapper[4482]: I1125 06:50:00.145442 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rr27s" podUID="74a51867-1870-4ee4-bd5d-66ac6f1e3201" containerName="registry-server" probeResult="failure" output=<
Nov 25 06:50:00 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s
Nov 25 06:50:00 crc kubenswrapper[4482]: >
kubenswrapper[4482]: > Nov 25 06:50:00 crc kubenswrapper[4482]: I1125 06:50:00.601919 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fwlcs" podUID="51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" containerName="registry-server" probeResult="failure" output=< Nov 25 06:50:00 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 06:50:00 crc kubenswrapper[4482]: > Nov 25 06:50:00 crc kubenswrapper[4482]: I1125 06:50:00.890418 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:50:00 crc kubenswrapper[4482]: I1125 06:50:00.890513 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:50:00 crc kubenswrapper[4482]: I1125 06:50:00.929943 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:50:01 crc kubenswrapper[4482]: I1125 06:50:01.293281 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:50:01 crc kubenswrapper[4482]: I1125 06:50:01.293331 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:50:01 crc kubenswrapper[4482]: I1125 06:50:01.329119 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:50:01 crc kubenswrapper[4482]: I1125 06:50:01.812979 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qgcvz" Nov 25 06:50:02 crc kubenswrapper[4482]: I1125 06:50:02.314209 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:50:02 crc kubenswrapper[4482]: I1125 06:50:02.314406 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:50:02 crc kubenswrapper[4482]: I1125 06:50:02.702458 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:50:02 crc kubenswrapper[4482]: I1125 06:50:02.702508 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:50:03 crc kubenswrapper[4482]: I1125 06:50:03.329853 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9nkrg" podUID="7388949f-6c3e-4c11-96b6-b8a7c6ed5765" containerName="registry-server" probeResult="failure" output=< Nov 25 06:50:03 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 06:50:03 crc kubenswrapper[4482]: > Nov 25 06:50:03 crc kubenswrapper[4482]: I1125 06:50:03.735640 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s9sfj" podUID="ec39f8a8-f28c-488a-8f02-e6c122084ddc" containerName="registry-server" probeResult="failure" output=< Nov 25 06:50:03 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 06:50:03 crc kubenswrapper[4482]: > Nov 25 06:50:08 crc kubenswrapper[4482]: I1125 06:50:08.964411 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-5rzc2" Nov 25 06:50:08 crc kubenswrapper[4482]: I1125 06:50:08.992034 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5rzc2" Nov 25 06:50:09 crc kubenswrapper[4482]: I1125 06:50:09.117630 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 06:50:09 crc kubenswrapper[4482]: I1125 06:50:09.117675 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 06:50:09 crc kubenswrapper[4482]: I1125 06:50:09.121327 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rr27s" Nov 25 06:50:09 crc kubenswrapper[4482]: I1125 06:50:09.148319 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rr27s" Nov 25 06:50:09 crc kubenswrapper[4482]: I1125 06:50:09.310972 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-knqzt" Nov 25 06:50:09 crc kubenswrapper[4482]: I1125 06:50:09.311007 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-knqzt" Nov 25 06:50:09 crc kubenswrapper[4482]: I1125 06:50:09.340942 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-knqzt" Nov 25 06:50:09 crc kubenswrapper[4482]: I1125 06:50:09.589738 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fwlcs" Nov 25 06:50:09 crc kubenswrapper[4482]: I1125 06:50:09.635920 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f8zk7"] Nov 25 06:50:09 crc kubenswrapper[4482]: I1125 06:50:09.637604 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fwlcs" Nov 25 06:50:09 crc kubenswrapper[4482]: I1125 06:50:09.652397 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-knqzt" Nov 25 06:50:10 crc kubenswrapper[4482]: I1125 06:50:10.919873 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:50:10 crc kubenswrapper[4482]: I1125 06:50:10.987301 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwlcs"] Nov 25 06:50:10 crc kubenswrapper[4482]: I1125 06:50:10.987498 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fwlcs" podUID="51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" containerName="registry-server" containerID="cri-o://3a14d2a11d4cb47094352f153e20a27ec32193d630c1afa2189c21a010883a6f" gracePeriod=2 Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.318524 4482 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.330324 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4vkkv"
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.493130 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-utilities\") pod \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\" (UID: \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\") "
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.493489 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj5zs\" (UniqueName: \"kubernetes.io/projected/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-kube-api-access-qj5zs\") pod \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\" (UID: \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\") "
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.493535 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-catalog-content\") pod \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\" (UID: \"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7\") "
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.494076 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-utilities" (OuterVolumeSpecName: "utilities") pod "51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" (UID: "51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.498408 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-kube-api-access-qj5zs" (OuterVolumeSpecName: "kube-api-access-qj5zs") pod "51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" (UID: "51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7"). InnerVolumeSpecName "kube-api-access-qj5zs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.532319 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" (UID: "51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.587066 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-knqzt"]
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.595428 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.595454 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj5zs\" (UniqueName: \"kubernetes.io/projected/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-kube-api-access-qj5zs\") on node \"crc\" DevicePath \"\""
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.595464 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.612383 4482 generic.go:334] "Generic (PLEG): container finished" podID="51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" containerID="3a14d2a11d4cb47094352f153e20a27ec32193d630c1afa2189c21a010883a6f" exitCode=0
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.612442 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwlcs"
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.612495 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwlcs" event={"ID":"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7","Type":"ContainerDied","Data":"3a14d2a11d4cb47094352f153e20a27ec32193d630c1afa2189c21a010883a6f"}
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.612545 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwlcs" event={"ID":"51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7","Type":"ContainerDied","Data":"8fa8fa6bf2012e20939cb13907578e0b9b9f384951e7387607e32a35dc8dd529"}
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.612565 4482 scope.go:117] "RemoveContainer" containerID="3a14d2a11d4cb47094352f153e20a27ec32193d630c1afa2189c21a010883a6f"
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.612769 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-knqzt" podUID="e0f269d4-265d-4c80-be6c-cff0634e8f87" containerName="registry-server" containerID="cri-o://50741cbe06ce34304efa1c9ba35a11a1546b89b94c3ca4378f1ddad1cfe309b3" gracePeriod=2
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.631554 4482 scope.go:117] "RemoveContainer" containerID="2537509c5cfcded5573f541c2a22ad766b6662b214ef38752de74b6d72147abb"
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.637547 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwlcs"]
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.640863 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fwlcs"]
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.670531 4482 scope.go:117] "RemoveContainer" containerID="4798d634ba8cb8012918b5defade64256b2c0ed7b8a0039f08b70cbee2d1f54d"
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.715688 4482 scope.go:117] "RemoveContainer" containerID="3a14d2a11d4cb47094352f153e20a27ec32193d630c1afa2189c21a010883a6f"
Nov 25 06:50:11 crc kubenswrapper[4482]: E1125 06:50:11.716296 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a14d2a11d4cb47094352f153e20a27ec32193d630c1afa2189c21a010883a6f\": container with ID starting with 3a14d2a11d4cb47094352f153e20a27ec32193d630c1afa2189c21a010883a6f not found: ID does not exist" containerID="3a14d2a11d4cb47094352f153e20a27ec32193d630c1afa2189c21a010883a6f"
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.716363 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a14d2a11d4cb47094352f153e20a27ec32193d630c1afa2189c21a010883a6f"} err="failed to get container status \"3a14d2a11d4cb47094352f153e20a27ec32193d630c1afa2189c21a010883a6f\": rpc error: code = NotFound desc = could not find container \"3a14d2a11d4cb47094352f153e20a27ec32193d630c1afa2189c21a010883a6f\": container with ID starting with 3a14d2a11d4cb47094352f153e20a27ec32193d630c1afa2189c21a010883a6f not found: ID does not exist"
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.716459 4482 scope.go:117] "RemoveContainer" containerID="2537509c5cfcded5573f541c2a22ad766b6662b214ef38752de74b6d72147abb"
Nov 25 06:50:11 crc kubenswrapper[4482]: E1125 06:50:11.716985 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2537509c5cfcded5573f541c2a22ad766b6662b214ef38752de74b6d72147abb\": container with ID starting with 2537509c5cfcded5573f541c2a22ad766b6662b214ef38752de74b6d72147abb not found: ID does not exist" containerID="2537509c5cfcded5573f541c2a22ad766b6662b214ef38752de74b6d72147abb"
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.717073 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2537509c5cfcded5573f541c2a22ad766b6662b214ef38752de74b6d72147abb"} err="failed to get container status \"2537509c5cfcded5573f541c2a22ad766b6662b214ef38752de74b6d72147abb\": rpc error: code = NotFound desc = could not find container \"2537509c5cfcded5573f541c2a22ad766b6662b214ef38752de74b6d72147abb\": container with ID starting with 2537509c5cfcded5573f541c2a22ad766b6662b214ef38752de74b6d72147abb not found: ID does not exist"
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.717146 4482 scope.go:117] "RemoveContainer" containerID="4798d634ba8cb8012918b5defade64256b2c0ed7b8a0039f08b70cbee2d1f54d"
Nov 25 06:50:11 crc kubenswrapper[4482]: E1125 06:50:11.717484 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4798d634ba8cb8012918b5defade64256b2c0ed7b8a0039f08b70cbee2d1f54d\": container with ID starting with 4798d634ba8cb8012918b5defade64256b2c0ed7b8a0039f08b70cbee2d1f54d not found: ID does not exist" containerID="4798d634ba8cb8012918b5defade64256b2c0ed7b8a0039f08b70cbee2d1f54d"
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.717524 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4798d634ba8cb8012918b5defade64256b2c0ed7b8a0039f08b70cbee2d1f54d"} err="failed to get container status \"4798d634ba8cb8012918b5defade64256b2c0ed7b8a0039f08b70cbee2d1f54d\": rpc error: code = NotFound desc = could not find container \"4798d634ba8cb8012918b5defade64256b2c0ed7b8a0039f08b70cbee2d1f54d\": container with ID starting with 4798d634ba8cb8012918b5defade64256b2c0ed7b8a0039f08b70cbee2d1f54d not found: ID does not exist"
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.837074 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" path="/var/lib/kubelet/pods/51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7/volumes"
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.897012 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knqzt"
Nov 25 06:50:11 crc kubenswrapper[4482]: I1125 06:50:11.948652 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.001717 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hsm5\" (UniqueName: \"kubernetes.io/projected/e0f269d4-265d-4c80-be6c-cff0634e8f87-kube-api-access-2hsm5\") pod \"e0f269d4-265d-4c80-be6c-cff0634e8f87\" (UID: \"e0f269d4-265d-4c80-be6c-cff0634e8f87\") "
Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.001771 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0f269d4-265d-4c80-be6c-cff0634e8f87-catalog-content\") pod \"e0f269d4-265d-4c80-be6c-cff0634e8f87\" (UID: \"e0f269d4-265d-4c80-be6c-cff0634e8f87\") "
Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.001837 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0f269d4-265d-4c80-be6c-cff0634e8f87-utilities\") pod \"e0f269d4-265d-4c80-be6c-cff0634e8f87\" (UID: \"e0f269d4-265d-4c80-be6c-cff0634e8f87\") "
Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.002774 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f269d4-265d-4c80-be6c-cff0634e8f87-utilities" (OuterVolumeSpecName: "utilities") pod "e0f269d4-265d-4c80-be6c-cff0634e8f87" (UID: "e0f269d4-265d-4c80-be6c-cff0634e8f87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.018441 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f269d4-265d-4c80-be6c-cff0634e8f87-kube-api-access-2hsm5" (OuterVolumeSpecName: "kube-api-access-2hsm5") pod "e0f269d4-265d-4c80-be6c-cff0634e8f87" (UID: "e0f269d4-265d-4c80-be6c-cff0634e8f87"). InnerVolumeSpecName "kube-api-access-2hsm5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.039452 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f269d4-265d-4c80-be6c-cff0634e8f87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0f269d4-265d-4c80-be6c-cff0634e8f87" (UID: "e0f269d4-265d-4c80-be6c-cff0634e8f87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
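The E1125 "ContainerStatus from runtime service failed" entries above are a benign race rather than a real fault: once the pod is removed, the kubelet re-issues RemoveContainer for container IDs that CRI-O has already purged, and the runtime can only answer NotFound. Each such error is immediately followed by a "DeleteContainer returned error" line for the same ID, and nothing remains to clean up. A sketch for extracting those IDs from a dump like this one so they can be set aside during review (reads the journal text on stdin):

    # Sketch: collect container IDs whose only failure is the post-removal
    # NotFound status lookup, from a kubelet journal dump on stdin.
    import re, sys

    notfound = re.compile(r'could not find container \\"([0-9a-f]{64})\\"')
    ids = {m.group(1) for line in sys.stdin for m in notfound.finditer(line)}
    print(f"{len(ids)} container(s) hit the benign NotFound race:")
    for cid in sorted(ids):
        print("  " + cid[:12])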
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.103254 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hsm5\" (UniqueName: \"kubernetes.io/projected/e0f269d4-265d-4c80-be6c-cff0634e8f87-kube-api-access-2hsm5\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.103373 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0f269d4-265d-4c80-be6c-cff0634e8f87-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.103400 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0f269d4-265d-4c80-be6c-cff0634e8f87-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.333583 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.372930 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.621918 4482 generic.go:334] "Generic (PLEG): container finished" podID="e0f269d4-265d-4c80-be6c-cff0634e8f87" containerID="50741cbe06ce34304efa1c9ba35a11a1546b89b94c3ca4378f1ddad1cfe309b3" exitCode=0 Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.621988 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knqzt" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.622016 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knqzt" event={"ID":"e0f269d4-265d-4c80-be6c-cff0634e8f87","Type":"ContainerDied","Data":"50741cbe06ce34304efa1c9ba35a11a1546b89b94c3ca4378f1ddad1cfe309b3"} Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.622453 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knqzt" event={"ID":"e0f269d4-265d-4c80-be6c-cff0634e8f87","Type":"ContainerDied","Data":"bc9f0168dcaf7f325308f54106929b2a96ec3edf986b0537dbd0558ea449299f"} Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.622482 4482 scope.go:117] "RemoveContainer" containerID="50741cbe06ce34304efa1c9ba35a11a1546b89b94c3ca4378f1ddad1cfe309b3" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.636473 4482 scope.go:117] "RemoveContainer" containerID="2fcc06220c422c78be2599f4f27a291ee24c742c32ccd0f7d9859b58e7d013d1" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.645452 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-knqzt"] Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.649938 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-knqzt"] Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.669235 4482 scope.go:117] "RemoveContainer" containerID="a3e398f1b50dad34e4ab51f92f513c9b0564b31bbf34717d69a8061b14641f3b" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.682421 4482 scope.go:117] "RemoveContainer" containerID="50741cbe06ce34304efa1c9ba35a11a1546b89b94c3ca4378f1ddad1cfe309b3" Nov 25 06:50:12 crc kubenswrapper[4482]: E1125 06:50:12.682769 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"50741cbe06ce34304efa1c9ba35a11a1546b89b94c3ca4378f1ddad1cfe309b3\": container with ID starting with 50741cbe06ce34304efa1c9ba35a11a1546b89b94c3ca4378f1ddad1cfe309b3 not found: ID does not exist" containerID="50741cbe06ce34304efa1c9ba35a11a1546b89b94c3ca4378f1ddad1cfe309b3" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.682859 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50741cbe06ce34304efa1c9ba35a11a1546b89b94c3ca4378f1ddad1cfe309b3"} err="failed to get container status \"50741cbe06ce34304efa1c9ba35a11a1546b89b94c3ca4378f1ddad1cfe309b3\": rpc error: code = NotFound desc = could not find container \"50741cbe06ce34304efa1c9ba35a11a1546b89b94c3ca4378f1ddad1cfe309b3\": container with ID starting with 50741cbe06ce34304efa1c9ba35a11a1546b89b94c3ca4378f1ddad1cfe309b3 not found: ID does not exist" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.682929 4482 scope.go:117] "RemoveContainer" containerID="2fcc06220c422c78be2599f4f27a291ee24c742c32ccd0f7d9859b58e7d013d1" Nov 25 06:50:12 crc kubenswrapper[4482]: E1125 06:50:12.683217 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fcc06220c422c78be2599f4f27a291ee24c742c32ccd0f7d9859b58e7d013d1\": container with ID starting with 2fcc06220c422c78be2599f4f27a291ee24c742c32ccd0f7d9859b58e7d013d1 not found: ID does not exist" containerID="2fcc06220c422c78be2599f4f27a291ee24c742c32ccd0f7d9859b58e7d013d1" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.683291 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcc06220c422c78be2599f4f27a291ee24c742c32ccd0f7d9859b58e7d013d1"} err="failed to get container status \"2fcc06220c422c78be2599f4f27a291ee24c742c32ccd0f7d9859b58e7d013d1\": rpc error: code = NotFound desc = could not find container \"2fcc06220c422c78be2599f4f27a291ee24c742c32ccd0f7d9859b58e7d013d1\": container with ID starting with 2fcc06220c422c78be2599f4f27a291ee24c742c32ccd0f7d9859b58e7d013d1 not found: ID does not exist" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.683360 4482 scope.go:117] "RemoveContainer" containerID="a3e398f1b50dad34e4ab51f92f513c9b0564b31bbf34717d69a8061b14641f3b" Nov 25 06:50:12 crc kubenswrapper[4482]: E1125 06:50:12.683587 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3e398f1b50dad34e4ab51f92f513c9b0564b31bbf34717d69a8061b14641f3b\": container with ID starting with a3e398f1b50dad34e4ab51f92f513c9b0564b31bbf34717d69a8061b14641f3b not found: ID does not exist" containerID="a3e398f1b50dad34e4ab51f92f513c9b0564b31bbf34717d69a8061b14641f3b" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.683663 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3e398f1b50dad34e4ab51f92f513c9b0564b31bbf34717d69a8061b14641f3b"} err="failed to get container status \"a3e398f1b50dad34e4ab51f92f513c9b0564b31bbf34717d69a8061b14641f3b\": rpc error: code = NotFound desc = could not find container \"a3e398f1b50dad34e4ab51f92f513c9b0564b31bbf34717d69a8061b14641f3b\": container with ID starting with a3e398f1b50dad34e4ab51f92f513c9b0564b31bbf34717d69a8061b14641f3b not found: ID does not exist" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.730952 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:50:12 crc kubenswrapper[4482]: I1125 06:50:12.765374 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s9sfj" Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.386939 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkkv"] Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.387178 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4vkkv" podUID="ad24fc25-dae5-4720-81b3-0960ee86d505" containerName="registry-server" containerID="cri-o://3318e1eaa0afd591767c84b3b95b014031f298127f4da097a9390825c8642273" gracePeriod=2 Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.630300 4482 generic.go:334] "Generic (PLEG): container finished" podID="ad24fc25-dae5-4720-81b3-0960ee86d505" containerID="3318e1eaa0afd591767c84b3b95b014031f298127f4da097a9390825c8642273" exitCode=0 Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.630459 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkkv" event={"ID":"ad24fc25-dae5-4720-81b3-0960ee86d505","Type":"ContainerDied","Data":"3318e1eaa0afd591767c84b3b95b014031f298127f4da097a9390825c8642273"} Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.681837 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vkkv" Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.826521 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxwks\" (UniqueName: \"kubernetes.io/projected/ad24fc25-dae5-4720-81b3-0960ee86d505-kube-api-access-vxwks\") pod \"ad24fc25-dae5-4720-81b3-0960ee86d505\" (UID: \"ad24fc25-dae5-4720-81b3-0960ee86d505\") " Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.826562 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad24fc25-dae5-4720-81b3-0960ee86d505-utilities\") pod \"ad24fc25-dae5-4720-81b3-0960ee86d505\" (UID: \"ad24fc25-dae5-4720-81b3-0960ee86d505\") " Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.826582 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad24fc25-dae5-4720-81b3-0960ee86d505-catalog-content\") pod \"ad24fc25-dae5-4720-81b3-0960ee86d505\" (UID: \"ad24fc25-dae5-4720-81b3-0960ee86d505\") " Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.827625 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad24fc25-dae5-4720-81b3-0960ee86d505-utilities" (OuterVolumeSpecName: "utilities") pod "ad24fc25-dae5-4720-81b3-0960ee86d505" (UID: "ad24fc25-dae5-4720-81b3-0960ee86d505"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.830950 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad24fc25-dae5-4720-81b3-0960ee86d505-kube-api-access-vxwks" (OuterVolumeSpecName: "kube-api-access-vxwks") pod "ad24fc25-dae5-4720-81b3-0960ee86d505" (UID: "ad24fc25-dae5-4720-81b3-0960ee86d505"). InnerVolumeSpecName "kube-api-access-vxwks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.836760 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f269d4-265d-4c80-be6c-cff0634e8f87" path="/var/lib/kubelet/pods/e0f269d4-265d-4c80-be6c-cff0634e8f87/volumes" Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.841658 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad24fc25-dae5-4720-81b3-0960ee86d505-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad24fc25-dae5-4720-81b3-0960ee86d505" (UID: "ad24fc25-dae5-4720-81b3-0960ee86d505"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.927837 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxwks\" (UniqueName: \"kubernetes.io/projected/ad24fc25-dae5-4720-81b3-0960ee86d505-kube-api-access-vxwks\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.927864 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad24fc25-dae5-4720-81b3-0960ee86d505-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:13 crc kubenswrapper[4482]: I1125 06:50:13.927874 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad24fc25-dae5-4720-81b3-0960ee86d505-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:14 crc kubenswrapper[4482]: I1125 06:50:14.637740 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkkv" event={"ID":"ad24fc25-dae5-4720-81b3-0960ee86d505","Type":"ContainerDied","Data":"26f3ee6155f3ccfc21955939e113a9020bc4401697b242a1602dfe9f0e5518dd"} Nov 25 06:50:14 crc kubenswrapper[4482]: I1125 06:50:14.637794 4482 scope.go:117] "RemoveContainer" containerID="3318e1eaa0afd591767c84b3b95b014031f298127f4da097a9390825c8642273" Nov 25 06:50:14 crc kubenswrapper[4482]: I1125 06:50:14.637799 4482 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 06:50:14 crc kubenswrapper[4482]: I1125 06:50:14.650961 4482 scope.go:117] "RemoveContainer" containerID="41108f0aaa2899ece0e375e5a95caa435a1921f4816213de10ef9725368767ad"
Nov 25 06:50:14 crc kubenswrapper[4482]: I1125 06:50:14.662359 4482 scope.go:117] "RemoveContainer" containerID="5a82a987133ea3ce5962b48a4b6abd573e82db1b076655ac77fc017b1a624eb2"
Nov 25 06:50:14 crc kubenswrapper[4482]: I1125 06:50:14.665759 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkkv"]
Nov 25 06:50:14 crc kubenswrapper[4482]: I1125 06:50:14.673212 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkkv"]
Nov 25 06:50:15 crc kubenswrapper[4482]: I1125 06:50:15.836846 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad24fc25-dae5-4720-81b3-0960ee86d505" path="/var/lib/kubelet/pods/ad24fc25-dae5-4720-81b3-0960ee86d505/volumes"
Nov 25 06:50:15 crc kubenswrapper[4482]: I1125 06:50:15.987761 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s9sfj"]
Nov 25 06:50:15 crc kubenswrapper[4482]: I1125 06:50:15.987951 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s9sfj" podUID="ec39f8a8-f28c-488a-8f02-e6c122084ddc" containerName="registry-server" containerID="cri-o://aaf13a645412de8efad8c85611624521913ebd6a06498c4173047c58c616e97c" gracePeriod=2
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.273429 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9sfj"
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.367933 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec39f8a8-f28c-488a-8f02-e6c122084ddc-utilities\") pod \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\" (UID: \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\") "
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.368272 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q52xk\" (UniqueName: \"kubernetes.io/projected/ec39f8a8-f28c-488a-8f02-e6c122084ddc-kube-api-access-q52xk\") pod \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\" (UID: \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\") "
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.368312 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec39f8a8-f28c-488a-8f02-e6c122084ddc-catalog-content\") pod \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\" (UID: \"ec39f8a8-f28c-488a-8f02-e6c122084ddc\") "
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.368573 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec39f8a8-f28c-488a-8f02-e6c122084ddc-utilities" (OuterVolumeSpecName: "utilities") pod "ec39f8a8-f28c-488a-8f02-e6c122084ddc" (UID: "ec39f8a8-f28c-488a-8f02-e6c122084ddc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.368836 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec39f8a8-f28c-488a-8f02-e6c122084ddc-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.373380 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec39f8a8-f28c-488a-8f02-e6c122084ddc-kube-api-access-q52xk" (OuterVolumeSpecName: "kube-api-access-q52xk") pod "ec39f8a8-f28c-488a-8f02-e6c122084ddc" (UID: "ec39f8a8-f28c-488a-8f02-e6c122084ddc"). InnerVolumeSpecName "kube-api-access-q52xk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.432983 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec39f8a8-f28c-488a-8f02-e6c122084ddc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec39f8a8-f28c-488a-8f02-e6c122084ddc" (UID: "ec39f8a8-f28c-488a-8f02-e6c122084ddc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.469794 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q52xk\" (UniqueName: \"kubernetes.io/projected/ec39f8a8-f28c-488a-8f02-e6c122084ddc-kube-api-access-q52xk\") on node \"crc\" DevicePath \"\""
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.469824 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec39f8a8-f28c-488a-8f02-e6c122084ddc-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.651993 4482 generic.go:334] "Generic (PLEG): container finished" podID="ec39f8a8-f28c-488a-8f02-e6c122084ddc" containerID="aaf13a645412de8efad8c85611624521913ebd6a06498c4173047c58c616e97c" exitCode=0
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.652046 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9sfj" event={"ID":"ec39f8a8-f28c-488a-8f02-e6c122084ddc","Type":"ContainerDied","Data":"aaf13a645412de8efad8c85611624521913ebd6a06498c4173047c58c616e97c"}
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.652095 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9sfj" event={"ID":"ec39f8a8-f28c-488a-8f02-e6c122084ddc","Type":"ContainerDied","Data":"e9b832a34966f2aae88c480cc66469632e5941ae70f7bab49858453e580279dc"}
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.652121 4482 scope.go:117] "RemoveContainer" containerID="aaf13a645412de8efad8c85611624521913ebd6a06498c4173047c58c616e97c"
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.652288 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9sfj"
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.663776 4482 scope.go:117] "RemoveContainer" containerID="0302db08ea72636fcb9956d59d492f75f46d599ecac2029505fa902fb9a444dd"
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.674607 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s9sfj"]
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.677514 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s9sfj"]
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.693185 4482 scope.go:117] "RemoveContainer" containerID="16340898b99bf5f0c3077592ec35159ef687970d5a48058739310ab2a5b012a9"
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.707095 4482 scope.go:117] "RemoveContainer" containerID="aaf13a645412de8efad8c85611624521913ebd6a06498c4173047c58c616e97c"
Nov 25 06:50:16 crc kubenswrapper[4482]: E1125 06:50:16.707583 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaf13a645412de8efad8c85611624521913ebd6a06498c4173047c58c616e97c\": container with ID starting with aaf13a645412de8efad8c85611624521913ebd6a06498c4173047c58c616e97c not found: ID does not exist" containerID="aaf13a645412de8efad8c85611624521913ebd6a06498c4173047c58c616e97c"
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.707628 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf13a645412de8efad8c85611624521913ebd6a06498c4173047c58c616e97c"} err="failed to get container status \"aaf13a645412de8efad8c85611624521913ebd6a06498c4173047c58c616e97c\": rpc error: code = NotFound desc = could not find container \"aaf13a645412de8efad8c85611624521913ebd6a06498c4173047c58c616e97c\": container with ID starting with aaf13a645412de8efad8c85611624521913ebd6a06498c4173047c58c616e97c not found: ID does not exist"
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.707655 4482 scope.go:117] "RemoveContainer" containerID="0302db08ea72636fcb9956d59d492f75f46d599ecac2029505fa902fb9a444dd"
Nov 25 06:50:16 crc kubenswrapper[4482]: E1125 06:50:16.708291 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0302db08ea72636fcb9956d59d492f75f46d599ecac2029505fa902fb9a444dd\": container with ID starting with 0302db08ea72636fcb9956d59d492f75f46d599ecac2029505fa902fb9a444dd not found: ID does not exist" containerID="0302db08ea72636fcb9956d59d492f75f46d599ecac2029505fa902fb9a444dd"
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.708320 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0302db08ea72636fcb9956d59d492f75f46d599ecac2029505fa902fb9a444dd"} err="failed to get container status \"0302db08ea72636fcb9956d59d492f75f46d599ecac2029505fa902fb9a444dd\": rpc error: code = NotFound desc = could not find container \"0302db08ea72636fcb9956d59d492f75f46d599ecac2029505fa902fb9a444dd\": container with ID starting with 0302db08ea72636fcb9956d59d492f75f46d599ecac2029505fa902fb9a444dd not found: ID does not exist"
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.708340 4482 scope.go:117] "RemoveContainer" containerID="16340898b99bf5f0c3077592ec35159ef687970d5a48058739310ab2a5b012a9"
Nov 25 06:50:16 crc kubenswrapper[4482]: E1125 06:50:16.708638 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16340898b99bf5f0c3077592ec35159ef687970d5a48058739310ab2a5b012a9\": container with ID starting with 16340898b99bf5f0c3077592ec35159ef687970d5a48058739310ab2a5b012a9 not found: ID does not exist" containerID="16340898b99bf5f0c3077592ec35159ef687970d5a48058739310ab2a5b012a9"
Nov 25 06:50:16 crc kubenswrapper[4482]: I1125 06:50:16.708662 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16340898b99bf5f0c3077592ec35159ef687970d5a48058739310ab2a5b012a9"} err="failed to get container status \"16340898b99bf5f0c3077592ec35159ef687970d5a48058739310ab2a5b012a9\": rpc error: code = NotFound desc = could not find container \"16340898b99bf5f0c3077592ec35159ef687970d5a48058739310ab2a5b012a9\": container with ID starting with 16340898b99bf5f0c3077592ec35159ef687970d5a48058739310ab2a5b012a9 not found: ID does not exist"
Nov 25 06:50:17 crc kubenswrapper[4482]: I1125 06:50:17.835862 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec39f8a8-f28c-488a-8f02-e6c122084ddc" path="/var/lib/kubelet/pods/ec39f8a8-f28c-488a-8f02-e6c122084ddc/volumes"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.656626 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" podUID="61e22994-72d9-477f-8f3f-89a77ade8196" containerName="oauth-openshift" containerID="cri-o://9506ef3a529177c01ae6521bc2c252d1c3e8f15e9ef7a070e19fd9d88fa99b4a" gracePeriod=15
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.952446 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.981677 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-767898c5d8-tznnz"]
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.981869 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" containerName="extract-content"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.981921 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" containerName="extract-content"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.981932 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f269d4-265d-4c80-be6c-cff0634e8f87" containerName="extract-utilities"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.981939 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f269d4-265d-4c80-be6c-cff0634e8f87" containerName="extract-utilities"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.981953 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e22994-72d9-477f-8f3f-89a77ade8196" containerName="oauth-openshift"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.981958 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e22994-72d9-477f-8f3f-89a77ade8196" containerName="oauth-openshift"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.981967 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f269d4-265d-4c80-be6c-cff0634e8f87" containerName="extract-content"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.981972 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f269d4-265d-4c80-be6c-cff0634e8f87" containerName="extract-content"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.981979 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37beea46-1843-4974-9dab-e2052f6d80b1" containerName="pruner"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.981984 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="37beea46-1843-4974-9dab-e2052f6d80b1" containerName="pruner"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.981990 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec39f8a8-f28c-488a-8f02-e6c122084ddc" containerName="registry-server"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.981995 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec39f8a8-f28c-488a-8f02-e6c122084ddc" containerName="registry-server"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.982002 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec39f8a8-f28c-488a-8f02-e6c122084ddc" containerName="extract-content"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982007 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec39f8a8-f28c-488a-8f02-e6c122084ddc" containerName="extract-content"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.982013 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9326ccfa-b7f4-4e47-879b-5379fbef0702" containerName="pruner"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982019 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="9326ccfa-b7f4-4e47-879b-5379fbef0702" containerName="pruner"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.982025 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec39f8a8-f28c-488a-8f02-e6c122084ddc" containerName="extract-utilities"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982032 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec39f8a8-f28c-488a-8f02-e6c122084ddc" containerName="extract-utilities"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.982038 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f269d4-265d-4c80-be6c-cff0634e8f87" containerName="registry-server"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982042 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f269d4-265d-4c80-be6c-cff0634e8f87" containerName="registry-server"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.982049 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff92469-ca47-4359-b56a-8df7332739ab" containerName="collect-profiles"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982054 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff92469-ca47-4359-b56a-8df7332739ab" containerName="collect-profiles"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.982062 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad24fc25-dae5-4720-81b3-0960ee86d505" containerName="extract-content"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982067 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad24fc25-dae5-4720-81b3-0960ee86d505" containerName="extract-content"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.982076 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad24fc25-dae5-4720-81b3-0960ee86d505" containerName="registry-server"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982081 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad24fc25-dae5-4720-81b3-0960ee86d505" containerName="registry-server"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.982089 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" containerName="extract-utilities"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982095 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" containerName="extract-utilities"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.982103 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad24fc25-dae5-4720-81b3-0960ee86d505" containerName="extract-utilities"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982108 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad24fc25-dae5-4720-81b3-0960ee86d505" containerName="extract-utilities"
Nov 25 06:50:34 crc kubenswrapper[4482]: E1125 06:50:34.982126 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" containerName="registry-server"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982132 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" containerName="registry-server"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982245 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f269d4-265d-4c80-be6c-cff0634e8f87" containerName="registry-server"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982254 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec39f8a8-f28c-488a-8f02-e6c122084ddc" containerName="registry-server"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982260 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad24fc25-dae5-4720-81b3-0960ee86d505" containerName="registry-server"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982267 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e22994-72d9-477f-8f3f-89a77ade8196" containerName="oauth-openshift"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982272 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="9326ccfa-b7f4-4e47-879b-5379fbef0702" containerName="pruner"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982279 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff92469-ca47-4359-b56a-8df7332739ab" containerName="collect-profiles"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982300 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f6e9d3-38ab-4a73-89a5-ba5cfdd35af7" containerName="registry-server"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982309 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="37beea46-1843-4974-9dab-e2052f6d80b1" containerName="pruner"
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.982645 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
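The burst of cpu_manager, state_mem, and memory_manager lines that accompanies the ADD of oauth-openshift-767898c5d8-tznnz is routine admission housekeeping: before the new pod is admitted, the kubelet's resource managers drop per-container CPU and memory assignments left behind by pods removed earlier (the podUIDs match the catalog pods and the old oauth pod deleted above). Despite the E-prefixed severity, each line records successful state removal. A sketch that groups those lines by pod UID (reads the journal text on stdin):

    # Sketch: group RemoveStaleState lines by pod UID to see which
    # terminated pods' CPU/memory assignments were dropped at admission.
    import re, sys
    from collections import defaultdict

    pat = re.compile(
        r'RemoveStaleState[:\s].*podUID="([0-9a-f-]{36})" containerName="([^"]+)"'
    )
    stale = defaultdict(set)
    for line in sys.stdin:
        m = pat.search(line)
        if m:
            stale[m.group(1)].add(m.group(2))
    for uid, names in sorted(stale.items()):
        print(uid[:8], sorted(names))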
Nov 25 06:50:34 crc kubenswrapper[4482]: I1125 06:50:34.989535 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-767898c5d8-tznnz"]
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075432 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-error\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075492 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-serving-cert\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075515 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-ocp-branding-template\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075545 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-login\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075566 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-cliconfig\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075592 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-session\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075622 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-service-ca\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075641 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-trusted-ca-bundle\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075664 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/61e22994-72d9-477f-8f3f-89a77ade8196-audit-dir\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075683 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-idp-0-file-data\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075701 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-router-certs\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075717 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5pfr\" (UniqueName: \"kubernetes.io/projected/61e22994-72d9-477f-8f3f-89a77ade8196-kube-api-access-g5pfr\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075740 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-provider-selection\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075759 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-audit-policies\") pod \"61e22994-72d9-477f-8f3f-89a77ade8196\" (UID: \"61e22994-72d9-477f-8f3f-89a77ade8196\") "
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075820 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-service-ca\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075845 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075863 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075891 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg7qz\" (UniqueName: \"kubernetes.io/projected/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-kube-api-access-rg7qz\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075909 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-audit-policies\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075938 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075956 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-session\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075971 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-router-certs\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.075993 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-user-template-login\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.076011 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.076027 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-user-template-error\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.076051 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-audit-dir\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.076071 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.076095 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz"
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.077192 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.077251 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.077578 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.084428 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e22994-72d9-477f-8f3f-89a77ade8196-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.084637 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.085950 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.086187 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.086741 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.086837 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61e22994-72d9-477f-8f3f-89a77ade8196-kube-api-access-g5pfr" (OuterVolumeSpecName: "kube-api-access-g5pfr") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "kube-api-access-g5pfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.086895 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.087069 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.087254 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.087366 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.087368 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "61e22994-72d9-477f-8f3f-89a77ade8196" (UID: "61e22994-72d9-477f-8f3f-89a77ade8196"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.177820 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-audit-dir\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.177910 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.177959 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.177996 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-service-ca\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.178035 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.178061 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.178089 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg7qz\" (UniqueName: \"kubernetes.io/projected/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-kube-api-access-rg7qz\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.178114 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-audit-policies\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.178186 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.178208 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-session\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.178235 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-router-certs\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.178269 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-user-template-login\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.178299 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.178320 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-user-template-error\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.178475 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-audit-dir\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.179594 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-audit-policies\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.179846 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-service-ca\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.180210 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.180836 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181464 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181573 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-login\") on node \"crc\" DevicePath 
\"\"" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181590 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181601 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181638 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181649 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181660 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181669 4482 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/61e22994-72d9-477f-8f3f-89a77ade8196-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181680 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181948 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5pfr\" (UniqueName: \"kubernetes.io/projected/61e22994-72d9-477f-8f3f-89a77ade8196-kube-api-access-g5pfr\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181959 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181969 4482 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/61e22994-72d9-477f-8f3f-89a77ade8196-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181980 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181992 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:35 crc 
kubenswrapper[4482]: I1125 06:50:35.182000 4482 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/61e22994-72d9-477f-8f3f-89a77ade8196-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.181697 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.182434 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.182619 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.183382 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-router-certs\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.184184 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-user-template-login\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.184948 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-user-template-error\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.185359 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-v4-0-config-system-session\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.192428 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg7qz\" (UniqueName: 
\"kubernetes.io/projected/5ba578d2-6978-4ac8-b2ab-ec283aa1a18d-kube-api-access-rg7qz\") pod \"oauth-openshift-767898c5d8-tznnz\" (UID: \"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d\") " pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.296668 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.632260 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-767898c5d8-tznnz"] Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.746695 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" event={"ID":"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d","Type":"ContainerStarted","Data":"521dae63eca9081b55bdd97952faab2fbb8ab3d2a6edd6c211d1369a827a0b60"} Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.748148 4482 generic.go:334] "Generic (PLEG): container finished" podID="61e22994-72d9-477f-8f3f-89a77ade8196" containerID="9506ef3a529177c01ae6521bc2c252d1c3e8f15e9ef7a070e19fd9d88fa99b4a" exitCode=0 Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.748222 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" event={"ID":"61e22994-72d9-477f-8f3f-89a77ade8196","Type":"ContainerDied","Data":"9506ef3a529177c01ae6521bc2c252d1c3e8f15e9ef7a070e19fd9d88fa99b4a"} Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.748243 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" event={"ID":"61e22994-72d9-477f-8f3f-89a77ade8196","Type":"ContainerDied","Data":"55d8fa39095ca86072f32975b63d100616a08fd19b3f6199a2da3f58f0b2f91d"} Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.748244 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f8zk7" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.748259 4482 scope.go:117] "RemoveContainer" containerID="9506ef3a529177c01ae6521bc2c252d1c3e8f15e9ef7a070e19fd9d88fa99b4a" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.801506 4482 scope.go:117] "RemoveContainer" containerID="9506ef3a529177c01ae6521bc2c252d1c3e8f15e9ef7a070e19fd9d88fa99b4a" Nov 25 06:50:35 crc kubenswrapper[4482]: E1125 06:50:35.801948 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9506ef3a529177c01ae6521bc2c252d1c3e8f15e9ef7a070e19fd9d88fa99b4a\": container with ID starting with 9506ef3a529177c01ae6521bc2c252d1c3e8f15e9ef7a070e19fd9d88fa99b4a not found: ID does not exist" containerID="9506ef3a529177c01ae6521bc2c252d1c3e8f15e9ef7a070e19fd9d88fa99b4a" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.801973 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9506ef3a529177c01ae6521bc2c252d1c3e8f15e9ef7a070e19fd9d88fa99b4a"} err="failed to get container status \"9506ef3a529177c01ae6521bc2c252d1c3e8f15e9ef7a070e19fd9d88fa99b4a\": rpc error: code = NotFound desc = could not find container \"9506ef3a529177c01ae6521bc2c252d1c3e8f15e9ef7a070e19fd9d88fa99b4a\": container with ID starting with 9506ef3a529177c01ae6521bc2c252d1c3e8f15e9ef7a070e19fd9d88fa99b4a not found: ID does not exist" Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.821937 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f8zk7"] Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.826942 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f8zk7"] Nov 25 06:50:35 crc kubenswrapper[4482]: I1125 06:50:35.836866 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61e22994-72d9-477f-8f3f-89a77ade8196" path="/var/lib/kubelet/pods/61e22994-72d9-477f-8f3f-89a77ade8196/volumes" Nov 25 06:50:36 crc kubenswrapper[4482]: I1125 06:50:36.755911 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" event={"ID":"5ba578d2-6978-4ac8-b2ab-ec283aa1a18d","Type":"ContainerStarted","Data":"2322130e5e2e49ebec5052b44846f682585a3c178a26d0f37e4fb489f51433eb"} Nov 25 06:50:36 crc kubenswrapper[4482]: I1125 06:50:36.756559 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:36 crc kubenswrapper[4482]: I1125 06:50:36.766953 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" Nov 25 06:50:36 crc kubenswrapper[4482]: I1125 06:50:36.780930 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-767898c5d8-tznnz" podStartSLOduration=27.780907992 podStartE2EDuration="27.780907992s" podCreationTimestamp="2025-11-25 06:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:50:36.778115302 +0000 UTC m=+211.266346562" watchObservedRunningTime="2025-11-25 06:50:36.780907992 +0000 UTC m=+211.269139251" Nov 25 06:50:39 crc kubenswrapper[4482]: I1125 06:50:39.117992 4482 patch_prober.go:28] interesting 
pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 06:50:39 crc kubenswrapper[4482]: I1125 06:50:39.118420 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 06:50:39 crc kubenswrapper[4482]: I1125 06:50:39.118483 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:50:39 crc kubenswrapper[4482]: I1125 06:50:39.119244 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 06:50:39 crc kubenswrapper[4482]: I1125 06:50:39.119299 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742" gracePeriod=600 Nov 25 06:50:39 crc kubenswrapper[4482]: I1125 06:50:39.776955 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742" exitCode=0 Nov 25 06:50:39 crc kubenswrapper[4482]: I1125 06:50:39.777078 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742"} Nov 25 06:50:39 crc kubenswrapper[4482]: I1125 06:50:39.777351 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"b9556eecd99aaa627f2f8338b1f2e2766518897cc04a75034690120a70e07dff"} Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.429146 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5rzc2"] Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.430436 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5rzc2" podUID="36a33d74-c23f-405e-a3c5-6f5a4de71e7a" containerName="registry-server" containerID="cri-o://12dc077bcceded9a97d9441582f6e861e2a601b5464da5f29e05342eb301b7c3" gracePeriod=30 Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.436695 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rr27s"] Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.436888 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rr27s" 
podUID="74a51867-1870-4ee4-bd5d-66ac6f1e3201" containerName="registry-server" containerID="cri-o://5bf5f1c0ad81a27b69cd314c5cb38fcada44f3b29b34a336b119a7cfbe16fe37" gracePeriod=30 Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.450873 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2h8cx"] Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.451141 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" podUID="8200abb3-4189-4dae-b0d3-9f09c330e278" containerName="marketplace-operator" containerID="cri-o://3611aa54af4ef37f4d560c8d12207c8ec89e0ac797a19216fa57c63c7a9ce437" gracePeriod=30 Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.454402 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qk2s9"] Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.454645 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qk2s9" podUID="0f447b1e-5bd0-49f1-9bbd-5277552dbba3" containerName="registry-server" containerID="cri-o://b59e0ce0dd1a528d189b51867deb739f91328360c46886a298634023574593f8" gracePeriod=30 Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.455419 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nkrg"] Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.455603 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9nkrg" podUID="7388949f-6c3e-4c11-96b6-b8a7c6ed5765" containerName="registry-server" containerID="cri-o://702efcbdd6091501e840aa017b955ce2893fbf5ca09acf70f45dedf31980efb2" gracePeriod=30 Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.459670 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8mb4t"] Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.460287 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.467185 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8mb4t"] Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.503270 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d6ccf816-7e8c-48db-8ab9-185bb05526f7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8mb4t\" (UID: \"d6ccf816-7e8c-48db-8ab9-185bb05526f7\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.503450 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgdlr\" (UniqueName: \"kubernetes.io/projected/d6ccf816-7e8c-48db-8ab9-185bb05526f7-kube-api-access-dgdlr\") pod \"marketplace-operator-79b997595-8mb4t\" (UID: \"d6ccf816-7e8c-48db-8ab9-185bb05526f7\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.503547 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6ccf816-7e8c-48db-8ab9-185bb05526f7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8mb4t\" (UID: \"d6ccf816-7e8c-48db-8ab9-185bb05526f7\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.604000 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgdlr\" (UniqueName: \"kubernetes.io/projected/d6ccf816-7e8c-48db-8ab9-185bb05526f7-kube-api-access-dgdlr\") pod \"marketplace-operator-79b997595-8mb4t\" (UID: \"d6ccf816-7e8c-48db-8ab9-185bb05526f7\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.604044 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6ccf816-7e8c-48db-8ab9-185bb05526f7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8mb4t\" (UID: \"d6ccf816-7e8c-48db-8ab9-185bb05526f7\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.604097 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d6ccf816-7e8c-48db-8ab9-185bb05526f7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8mb4t\" (UID: \"d6ccf816-7e8c-48db-8ab9-185bb05526f7\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.605115 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6ccf816-7e8c-48db-8ab9-185bb05526f7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8mb4t\" (UID: \"d6ccf816-7e8c-48db-8ab9-185bb05526f7\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.616631 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/d6ccf816-7e8c-48db-8ab9-185bb05526f7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8mb4t\" (UID: \"d6ccf816-7e8c-48db-8ab9-185bb05526f7\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.619562 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgdlr\" (UniqueName: \"kubernetes.io/projected/d6ccf816-7e8c-48db-8ab9-185bb05526f7-kube-api-access-dgdlr\") pod \"marketplace-operator-79b997595-8mb4t\" (UID: \"d6ccf816-7e8c-48db-8ab9-185bb05526f7\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.762400 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5rzc2" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.779727 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.805971 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-catalog-content\") pod \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\" (UID: \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\") " Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.806001 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-utilities\") pod \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\" (UID: \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\") " Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.806044 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v89m7\" (UniqueName: \"kubernetes.io/projected/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-kube-api-access-v89m7\") pod \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\" (UID: \"36a33d74-c23f-405e-a3c5-6f5a4de71e7a\") " Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.810835 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-utilities" (OuterVolumeSpecName: "utilities") pod "36a33d74-c23f-405e-a3c5-6f5a4de71e7a" (UID: "36a33d74-c23f-405e-a3c5-6f5a4de71e7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.810851 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-kube-api-access-v89m7" (OuterVolumeSpecName: "kube-api-access-v89m7") pod "36a33d74-c23f-405e-a3c5-6f5a4de71e7a" (UID: "36a33d74-c23f-405e-a3c5-6f5a4de71e7a"). InnerVolumeSpecName "kube-api-access-v89m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.851507 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36a33d74-c23f-405e-a3c5-6f5a4de71e7a" (UID: "36a33d74-c23f-405e-a3c5-6f5a4de71e7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.907316 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v89m7\" (UniqueName: \"kubernetes.io/projected/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-kube-api-access-v89m7\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.907344 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.907353 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36a33d74-c23f-405e-a3c5-6f5a4de71e7a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.938847 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rr27s" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.951925 4482 generic.go:334] "Generic (PLEG): container finished" podID="0f447b1e-5bd0-49f1-9bbd-5277552dbba3" containerID="b59e0ce0dd1a528d189b51867deb739f91328360c46886a298634023574593f8" exitCode=0 Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.952000 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qk2s9" event={"ID":"0f447b1e-5bd0-49f1-9bbd-5277552dbba3","Type":"ContainerDied","Data":"b59e0ce0dd1a528d189b51867deb739f91328360c46886a298634023574593f8"} Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.963330 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.968997 4482 generic.go:334] "Generic (PLEG): container finished" podID="7388949f-6c3e-4c11-96b6-b8a7c6ed5765" containerID="702efcbdd6091501e840aa017b955ce2893fbf5ca09acf70f45dedf31980efb2" exitCode=0 Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.969048 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nkrg" event={"ID":"7388949f-6c3e-4c11-96b6-b8a7c6ed5765","Type":"ContainerDied","Data":"702efcbdd6091501e840aa017b955ce2893fbf5ca09acf70f45dedf31980efb2"} Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.969567 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.971309 4482 generic.go:334] "Generic (PLEG): container finished" podID="8200abb3-4189-4dae-b0d3-9f09c330e278" containerID="3611aa54af4ef37f4d560c8d12207c8ec89e0ac797a19216fa57c63c7a9ce437" exitCode=0 Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.971370 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" event={"ID":"8200abb3-4189-4dae-b0d3-9f09c330e278","Type":"ContainerDied","Data":"3611aa54af4ef37f4d560c8d12207c8ec89e0ac797a19216fa57c63c7a9ce437"} Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.971402 4482 scope.go:117] "RemoveContainer" containerID="3611aa54af4ef37f4d560c8d12207c8ec89e0ac797a19216fa57c63c7a9ce437" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.989718 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.989862 4482 generic.go:334] "Generic (PLEG): container finished" podID="36a33d74-c23f-405e-a3c5-6f5a4de71e7a" containerID="12dc077bcceded9a97d9441582f6e861e2a601b5464da5f29e05342eb301b7c3" exitCode=0 Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.989944 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rzc2" event={"ID":"36a33d74-c23f-405e-a3c5-6f5a4de71e7a","Type":"ContainerDied","Data":"12dc077bcceded9a97d9441582f6e861e2a601b5464da5f29e05342eb301b7c3"} Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.989978 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5rzc2" event={"ID":"36a33d74-c23f-405e-a3c5-6f5a4de71e7a","Type":"ContainerDied","Data":"7707eafeca8af3fd8d7a7c0761c6c7e05071a66b2059482280654d0a198393ac"} Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.990026 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5rzc2" Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.995197 4482 generic.go:334] "Generic (PLEG): container finished" podID="74a51867-1870-4ee4-bd5d-66ac6f1e3201" containerID="5bf5f1c0ad81a27b69cd314c5cb38fcada44f3b29b34a336b119a7cfbe16fe37" exitCode=0 Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.995237 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr27s" event={"ID":"74a51867-1870-4ee4-bd5d-66ac6f1e3201","Type":"ContainerDied","Data":"5bf5f1c0ad81a27b69cd314c5cb38fcada44f3b29b34a336b119a7cfbe16fe37"} Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.995259 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr27s" event={"ID":"74a51867-1870-4ee4-bd5d-66ac6f1e3201","Type":"ContainerDied","Data":"1073782d991dd358e8a0769544815e1bd787b8f557b02b93986279898df10f17"} Nov 25 06:51:03 crc kubenswrapper[4482]: I1125 06:51:03.995330 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rr27s" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.012851 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a51867-1870-4ee4-bd5d-66ac6f1e3201-utilities\") pod \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\" (UID: \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\") " Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.012887 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw5cg\" (UniqueName: \"kubernetes.io/projected/8200abb3-4189-4dae-b0d3-9f09c330e278-kube-api-access-hw5cg\") pod \"8200abb3-4189-4dae-b0d3-9f09c330e278\" (UID: \"8200abb3-4189-4dae-b0d3-9f09c330e278\") " Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.012918 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-catalog-content\") pod \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\" (UID: \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\") " Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.013019 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qcv6\" (UniqueName: \"kubernetes.io/projected/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-kube-api-access-7qcv6\") pod \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\" (UID: \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\") " Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.013047 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dm5z\" (UniqueName: \"kubernetes.io/projected/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-kube-api-access-7dm5z\") pod \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\" (UID: \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\") " Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.013079 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvgjl\" (UniqueName: \"kubernetes.io/projected/74a51867-1870-4ee4-bd5d-66ac6f1e3201-kube-api-access-mvgjl\") pod \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\" (UID: \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\") " Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.013117 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8200abb3-4189-4dae-b0d3-9f09c330e278-marketplace-trusted-ca\") pod \"8200abb3-4189-4dae-b0d3-9f09c330e278\" (UID: \"8200abb3-4189-4dae-b0d3-9f09c330e278\") " Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.013645 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-catalog-content\") pod \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\" (UID: \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\") " Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.013902 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-utilities\") pod \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\" (UID: \"0f447b1e-5bd0-49f1-9bbd-5277552dbba3\") " Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.013947 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-utilities\") pod \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\" (UID: \"7388949f-6c3e-4c11-96b6-b8a7c6ed5765\") " Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.013970 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8200abb3-4189-4dae-b0d3-9f09c330e278-marketplace-operator-metrics\") pod \"8200abb3-4189-4dae-b0d3-9f09c330e278\" (UID: \"8200abb3-4189-4dae-b0d3-9f09c330e278\") " Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.013990 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a51867-1870-4ee4-bd5d-66ac6f1e3201-catalog-content\") pod \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\" (UID: \"74a51867-1870-4ee4-bd5d-66ac6f1e3201\") " Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.016717 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a51867-1870-4ee4-bd5d-66ac6f1e3201-utilities" (OuterVolumeSpecName: "utilities") pod "74a51867-1870-4ee4-bd5d-66ac6f1e3201" (UID: "74a51867-1870-4ee4-bd5d-66ac6f1e3201"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.017692 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8200abb3-4189-4dae-b0d3-9f09c330e278-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8200abb3-4189-4dae-b0d3-9f09c330e278" (UID: "8200abb3-4189-4dae-b0d3-9f09c330e278"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.021224 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8200abb3-4189-4dae-b0d3-9f09c330e278-kube-api-access-hw5cg" (OuterVolumeSpecName: "kube-api-access-hw5cg") pod "8200abb3-4189-4dae-b0d3-9f09c330e278" (UID: "8200abb3-4189-4dae-b0d3-9f09c330e278"). InnerVolumeSpecName "kube-api-access-hw5cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.023573 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-utilities" (OuterVolumeSpecName: "utilities") pod "7388949f-6c3e-4c11-96b6-b8a7c6ed5765" (UID: "7388949f-6c3e-4c11-96b6-b8a7c6ed5765"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.027364 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8200abb3-4189-4dae-b0d3-9f09c330e278-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8200abb3-4189-4dae-b0d3-9f09c330e278" (UID: "8200abb3-4189-4dae-b0d3-9f09c330e278"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.029291 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-utilities" (OuterVolumeSpecName: "utilities") pod "0f447b1e-5bd0-49f1-9bbd-5277552dbba3" (UID: "0f447b1e-5bd0-49f1-9bbd-5277552dbba3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.029373 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-kube-api-access-7qcv6" (OuterVolumeSpecName: "kube-api-access-7qcv6") pod "7388949f-6c3e-4c11-96b6-b8a7c6ed5765" (UID: "7388949f-6c3e-4c11-96b6-b8a7c6ed5765"). InnerVolumeSpecName "kube-api-access-7qcv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.029410 4482 scope.go:117] "RemoveContainer" containerID="12dc077bcceded9a97d9441582f6e861e2a601b5464da5f29e05342eb301b7c3" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.052838 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a51867-1870-4ee4-bd5d-66ac6f1e3201-kube-api-access-mvgjl" (OuterVolumeSpecName: "kube-api-access-mvgjl") pod "74a51867-1870-4ee4-bd5d-66ac6f1e3201" (UID: "74a51867-1870-4ee4-bd5d-66ac6f1e3201"). InnerVolumeSpecName "kube-api-access-mvgjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.080365 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5rzc2"] Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.080794 4482 scope.go:117] "RemoveContainer" containerID="8abe6058c24e8c79cc2478285c5bfabafac955c6fc34623efcca33e4ee4284ef" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.082697 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5rzc2"] Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.091110 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f447b1e-5bd0-49f1-9bbd-5277552dbba3" (UID: "0f447b1e-5bd0-49f1-9bbd-5277552dbba3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.094585 4482 scope.go:117] "RemoveContainer" containerID="6965b666d02688c9dc593712d60580ef3e94fd94aa2006dd99cec5617ccb85fa" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.099236 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-kube-api-access-7dm5z" (OuterVolumeSpecName: "kube-api-access-7dm5z") pod "0f447b1e-5bd0-49f1-9bbd-5277552dbba3" (UID: "0f447b1e-5bd0-49f1-9bbd-5277552dbba3"). InnerVolumeSpecName "kube-api-access-7dm5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.104375 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a51867-1870-4ee4-bd5d-66ac6f1e3201-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74a51867-1870-4ee4-bd5d-66ac6f1e3201" (UID: "74a51867-1870-4ee4-bd5d-66ac6f1e3201"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.106814 4482 scope.go:117] "RemoveContainer" containerID="12dc077bcceded9a97d9441582f6e861e2a601b5464da5f29e05342eb301b7c3" Nov 25 06:51:04 crc kubenswrapper[4482]: E1125 06:51:04.107145 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12dc077bcceded9a97d9441582f6e861e2a601b5464da5f29e05342eb301b7c3\": container with ID starting with 12dc077bcceded9a97d9441582f6e861e2a601b5464da5f29e05342eb301b7c3 not found: ID does not exist" containerID="12dc077bcceded9a97d9441582f6e861e2a601b5464da5f29e05342eb301b7c3" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.107202 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12dc077bcceded9a97d9441582f6e861e2a601b5464da5f29e05342eb301b7c3"} err="failed to get container status \"12dc077bcceded9a97d9441582f6e861e2a601b5464da5f29e05342eb301b7c3\": rpc error: code = NotFound desc = could not find container \"12dc077bcceded9a97d9441582f6e861e2a601b5464da5f29e05342eb301b7c3\": container with ID starting with 12dc077bcceded9a97d9441582f6e861e2a601b5464da5f29e05342eb301b7c3 not found: ID does not exist" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.107238 4482 scope.go:117] "RemoveContainer" containerID="8abe6058c24e8c79cc2478285c5bfabafac955c6fc34623efcca33e4ee4284ef" Nov 25 06:51:04 crc kubenswrapper[4482]: E1125 06:51:04.107533 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8abe6058c24e8c79cc2478285c5bfabafac955c6fc34623efcca33e4ee4284ef\": container with ID starting with 8abe6058c24e8c79cc2478285c5bfabafac955c6fc34623efcca33e4ee4284ef not found: ID does not exist" containerID="8abe6058c24e8c79cc2478285c5bfabafac955c6fc34623efcca33e4ee4284ef" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.107555 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8abe6058c24e8c79cc2478285c5bfabafac955c6fc34623efcca33e4ee4284ef"} err="failed to get container status \"8abe6058c24e8c79cc2478285c5bfabafac955c6fc34623efcca33e4ee4284ef\": rpc error: code = NotFound desc = could not find container \"8abe6058c24e8c79cc2478285c5bfabafac955c6fc34623efcca33e4ee4284ef\": container with ID starting with 8abe6058c24e8c79cc2478285c5bfabafac955c6fc34623efcca33e4ee4284ef not found: ID does not exist" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.107570 4482 scope.go:117] "RemoveContainer" containerID="6965b666d02688c9dc593712d60580ef3e94fd94aa2006dd99cec5617ccb85fa" Nov 25 06:51:04 crc kubenswrapper[4482]: E1125 06:51:04.107773 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6965b666d02688c9dc593712d60580ef3e94fd94aa2006dd99cec5617ccb85fa\": container with ID starting with 6965b666d02688c9dc593712d60580ef3e94fd94aa2006dd99cec5617ccb85fa not found: ID does not exist" containerID="6965b666d02688c9dc593712d60580ef3e94fd94aa2006dd99cec5617ccb85fa" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.107795 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6965b666d02688c9dc593712d60580ef3e94fd94aa2006dd99cec5617ccb85fa"} err="failed to get container status \"6965b666d02688c9dc593712d60580ef3e94fd94aa2006dd99cec5617ccb85fa\": rpc error: code = NotFound desc = could not 
find container \"6965b666d02688c9dc593712d60580ef3e94fd94aa2006dd99cec5617ccb85fa\": container with ID starting with 6965b666d02688c9dc593712d60580ef3e94fd94aa2006dd99cec5617ccb85fa not found: ID does not exist" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.107808 4482 scope.go:117] "RemoveContainer" containerID="5bf5f1c0ad81a27b69cd314c5cb38fcada44f3b29b34a336b119a7cfbe16fe37" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.115407 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a51867-1870-4ee4-bd5d-66ac6f1e3201-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.115437 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw5cg\" (UniqueName: \"kubernetes.io/projected/8200abb3-4189-4dae-b0d3-9f09c330e278-kube-api-access-hw5cg\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.115448 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.115457 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qcv6\" (UniqueName: \"kubernetes.io/projected/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-kube-api-access-7qcv6\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.115466 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dm5z\" (UniqueName: \"kubernetes.io/projected/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-kube-api-access-7dm5z\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.115473 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvgjl\" (UniqueName: \"kubernetes.io/projected/74a51867-1870-4ee4-bd5d-66ac6f1e3201-kube-api-access-mvgjl\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.115482 4482 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8200abb3-4189-4dae-b0d3-9f09c330e278-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.115490 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f447b1e-5bd0-49f1-9bbd-5277552dbba3-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.115497 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.115507 4482 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8200abb3-4189-4dae-b0d3-9f09c330e278-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.115517 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a51867-1870-4ee4-bd5d-66ac6f1e3201-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.119411 4482 scope.go:117] "RemoveContainer" 
containerID="a33c9b014f9b238f9f0389ec1d64deaafcb9e1d930b286099ab93c0da1782ffb" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.141961 4482 scope.go:117] "RemoveContainer" containerID="8cf49fb90bfc8d3b6d0abd1d00de80b0b81bf5706490e3e659b76eae565c3245" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.154605 4482 scope.go:117] "RemoveContainer" containerID="5bf5f1c0ad81a27b69cd314c5cb38fcada44f3b29b34a336b119a7cfbe16fe37" Nov 25 06:51:04 crc kubenswrapper[4482]: E1125 06:51:04.155264 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bf5f1c0ad81a27b69cd314c5cb38fcada44f3b29b34a336b119a7cfbe16fe37\": container with ID starting with 5bf5f1c0ad81a27b69cd314c5cb38fcada44f3b29b34a336b119a7cfbe16fe37 not found: ID does not exist" containerID="5bf5f1c0ad81a27b69cd314c5cb38fcada44f3b29b34a336b119a7cfbe16fe37" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.155307 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bf5f1c0ad81a27b69cd314c5cb38fcada44f3b29b34a336b119a7cfbe16fe37"} err="failed to get container status \"5bf5f1c0ad81a27b69cd314c5cb38fcada44f3b29b34a336b119a7cfbe16fe37\": rpc error: code = NotFound desc = could not find container \"5bf5f1c0ad81a27b69cd314c5cb38fcada44f3b29b34a336b119a7cfbe16fe37\": container with ID starting with 5bf5f1c0ad81a27b69cd314c5cb38fcada44f3b29b34a336b119a7cfbe16fe37 not found: ID does not exist" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.155339 4482 scope.go:117] "RemoveContainer" containerID="a33c9b014f9b238f9f0389ec1d64deaafcb9e1d930b286099ab93c0da1782ffb" Nov 25 06:51:04 crc kubenswrapper[4482]: E1125 06:51:04.155755 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a33c9b014f9b238f9f0389ec1d64deaafcb9e1d930b286099ab93c0da1782ffb\": container with ID starting with a33c9b014f9b238f9f0389ec1d64deaafcb9e1d930b286099ab93c0da1782ffb not found: ID does not exist" containerID="a33c9b014f9b238f9f0389ec1d64deaafcb9e1d930b286099ab93c0da1782ffb" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.155811 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a33c9b014f9b238f9f0389ec1d64deaafcb9e1d930b286099ab93c0da1782ffb"} err="failed to get container status \"a33c9b014f9b238f9f0389ec1d64deaafcb9e1d930b286099ab93c0da1782ffb\": rpc error: code = NotFound desc = could not find container \"a33c9b014f9b238f9f0389ec1d64deaafcb9e1d930b286099ab93c0da1782ffb\": container with ID starting with a33c9b014f9b238f9f0389ec1d64deaafcb9e1d930b286099ab93c0da1782ffb not found: ID does not exist" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.155846 4482 scope.go:117] "RemoveContainer" containerID="8cf49fb90bfc8d3b6d0abd1d00de80b0b81bf5706490e3e659b76eae565c3245" Nov 25 06:51:04 crc kubenswrapper[4482]: E1125 06:51:04.156252 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cf49fb90bfc8d3b6d0abd1d00de80b0b81bf5706490e3e659b76eae565c3245\": container with ID starting with 8cf49fb90bfc8d3b6d0abd1d00de80b0b81bf5706490e3e659b76eae565c3245 not found: ID does not exist" containerID="8cf49fb90bfc8d3b6d0abd1d00de80b0b81bf5706490e3e659b76eae565c3245" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.156290 4482 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8cf49fb90bfc8d3b6d0abd1d00de80b0b81bf5706490e3e659b76eae565c3245"} err="failed to get container status \"8cf49fb90bfc8d3b6d0abd1d00de80b0b81bf5706490e3e659b76eae565c3245\": rpc error: code = NotFound desc = could not find container \"8cf49fb90bfc8d3b6d0abd1d00de80b0b81bf5706490e3e659b76eae565c3245\": container with ID starting with 8cf49fb90bfc8d3b6d0abd1d00de80b0b81bf5706490e3e659b76eae565c3245 not found: ID does not exist" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.162815 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7388949f-6c3e-4c11-96b6-b8a7c6ed5765" (UID: "7388949f-6c3e-4c11-96b6-b8a7c6ed5765"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.216128 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7388949f-6c3e-4c11-96b6-b8a7c6ed5765-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.258236 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8mb4t"] Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.321259 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rr27s"] Nov 25 06:51:04 crc kubenswrapper[4482]: I1125 06:51:04.323194 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rr27s"] Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.008249 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qk2s9" event={"ID":"0f447b1e-5bd0-49f1-9bbd-5277552dbba3","Type":"ContainerDied","Data":"a53ffee34b42c415f2a825660b8c9f32fe750e1657c053d815e6d5774852733c"} Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.008307 4482 scope.go:117] "RemoveContainer" containerID="b59e0ce0dd1a528d189b51867deb739f91328360c46886a298634023574593f8" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.008311 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qk2s9" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.011630 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nkrg" event={"ID":"7388949f-6c3e-4c11-96b6-b8a7c6ed5765","Type":"ContainerDied","Data":"ba5b0c4ded9d9b8535ae475fede276c3bf7caaf8fdde04987b082445a61e3013"} Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.011672 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nkrg" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.013209 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" event={"ID":"8200abb3-4189-4dae-b0d3-9f09c330e278","Type":"ContainerDied","Data":"6c699d868ecbf7f581256b341cd2ab5574d13b648e360945bbb20ea7dd967dde"} Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.013398 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2h8cx" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.017611 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" event={"ID":"d6ccf816-7e8c-48db-8ab9-185bb05526f7","Type":"ContainerStarted","Data":"263e20449d722a242f2a6993df40910b034efaf473517a049d9ea0688883bdec"} Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.017653 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" event={"ID":"d6ccf816-7e8c-48db-8ab9-185bb05526f7","Type":"ContainerStarted","Data":"67135e9bdcb55de0578119ed55db706994bddb3caee28e76f6d6345ea0ec7013"} Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.018118 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.021853 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.030638 4482 scope.go:117] "RemoveContainer" containerID="39bb2864dfe41dad3b0916da7f55e8cd0f36e8ba1e010ab2ccc90904a4977c40" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.044387 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8mb4t" podStartSLOduration=2.044355424 podStartE2EDuration="2.044355424s" podCreationTimestamp="2025-11-25 06:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:51:05.038805964 +0000 UTC m=+239.527037213" watchObservedRunningTime="2025-11-25 06:51:05.044355424 +0000 UTC m=+239.532586683" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.054034 4482 scope.go:117] "RemoveContainer" containerID="61b229dabdbe0fc493bf5eb104f7d233ded40cb3877425a2c982f5e8b2d00917" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.079035 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qk2s9"] Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.084029 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qk2s9"] Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.095565 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2h8cx"] Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.099764 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2h8cx"] Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.106604 4482 scope.go:117] "RemoveContainer" containerID="702efcbdd6091501e840aa017b955ce2893fbf5ca09acf70f45dedf31980efb2" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.109947 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nkrg"] Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.112816 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9nkrg"] Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.124260 4482 scope.go:117] "RemoveContainer" containerID="bc1500d34d49702a0c235f8a0cb55b668446f72e7e7e4833d546564cec4e8893" Nov 25 06:51:05 crc 
kubenswrapper[4482]: I1125 06:51:05.135913 4482 scope.go:117] "RemoveContainer" containerID="c986f019340a91630609d5525b902b73f2b606ad7bab3a8c9ed2d482d3bb5288" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.836283 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f447b1e-5bd0-49f1-9bbd-5277552dbba3" path="/var/lib/kubelet/pods/0f447b1e-5bd0-49f1-9bbd-5277552dbba3/volumes" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.837162 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36a33d74-c23f-405e-a3c5-6f5a4de71e7a" path="/var/lib/kubelet/pods/36a33d74-c23f-405e-a3c5-6f5a4de71e7a/volumes" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.837730 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7388949f-6c3e-4c11-96b6-b8a7c6ed5765" path="/var/lib/kubelet/pods/7388949f-6c3e-4c11-96b6-b8a7c6ed5765/volumes" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.838727 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a51867-1870-4ee4-bd5d-66ac6f1e3201" path="/var/lib/kubelet/pods/74a51867-1870-4ee4-bd5d-66ac6f1e3201/volumes" Nov 25 06:51:05 crc kubenswrapper[4482]: I1125 06:51:05.839355 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8200abb3-4189-4dae-b0d3-9f09c330e278" path="/var/lib/kubelet/pods/8200abb3-4189-4dae-b0d3-9f09c330e278/volumes" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.041215 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7mz8b"] Nov 25 06:51:06 crc kubenswrapper[4482]: E1125 06:51:06.041953 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a33d74-c23f-405e-a3c5-6f5a4de71e7a" containerName="registry-server" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.041974 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a33d74-c23f-405e-a3c5-6f5a4de71e7a" containerName="registry-server" Nov 25 06:51:06 crc kubenswrapper[4482]: E1125 06:51:06.042005 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a51867-1870-4ee4-bd5d-66ac6f1e3201" containerName="extract-content" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042014 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a51867-1870-4ee4-bd5d-66ac6f1e3201" containerName="extract-content" Nov 25 06:51:06 crc kubenswrapper[4482]: E1125 06:51:06.042034 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a33d74-c23f-405e-a3c5-6f5a4de71e7a" containerName="extract-utilities" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042044 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a33d74-c23f-405e-a3c5-6f5a4de71e7a" containerName="extract-utilities" Nov 25 06:51:06 crc kubenswrapper[4482]: E1125 06:51:06.042054 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a51867-1870-4ee4-bd5d-66ac6f1e3201" containerName="extract-utilities" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042060 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a51867-1870-4ee4-bd5d-66ac6f1e3201" containerName="extract-utilities" Nov 25 06:51:06 crc kubenswrapper[4482]: E1125 06:51:06.042069 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7388949f-6c3e-4c11-96b6-b8a7c6ed5765" containerName="extract-utilities" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042076 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="7388949f-6c3e-4c11-96b6-b8a7c6ed5765" 
containerName="extract-utilities" Nov 25 06:51:06 crc kubenswrapper[4482]: E1125 06:51:06.042091 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f447b1e-5bd0-49f1-9bbd-5277552dbba3" containerName="registry-server" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042099 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f447b1e-5bd0-49f1-9bbd-5277552dbba3" containerName="registry-server" Nov 25 06:51:06 crc kubenswrapper[4482]: E1125 06:51:06.042114 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8200abb3-4189-4dae-b0d3-9f09c330e278" containerName="marketplace-operator" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042120 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="8200abb3-4189-4dae-b0d3-9f09c330e278" containerName="marketplace-operator" Nov 25 06:51:06 crc kubenswrapper[4482]: E1125 06:51:06.042134 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7388949f-6c3e-4c11-96b6-b8a7c6ed5765" containerName="registry-server" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042145 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="7388949f-6c3e-4c11-96b6-b8a7c6ed5765" containerName="registry-server" Nov 25 06:51:06 crc kubenswrapper[4482]: E1125 06:51:06.042159 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f447b1e-5bd0-49f1-9bbd-5277552dbba3" containerName="extract-utilities" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042185 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f447b1e-5bd0-49f1-9bbd-5277552dbba3" containerName="extract-utilities" Nov 25 06:51:06 crc kubenswrapper[4482]: E1125 06:51:06.042201 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7388949f-6c3e-4c11-96b6-b8a7c6ed5765" containerName="extract-content" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042207 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="7388949f-6c3e-4c11-96b6-b8a7c6ed5765" containerName="extract-content" Nov 25 06:51:06 crc kubenswrapper[4482]: E1125 06:51:06.042228 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f447b1e-5bd0-49f1-9bbd-5277552dbba3" containerName="extract-content" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042234 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f447b1e-5bd0-49f1-9bbd-5277552dbba3" containerName="extract-content" Nov 25 06:51:06 crc kubenswrapper[4482]: E1125 06:51:06.042242 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a33d74-c23f-405e-a3c5-6f5a4de71e7a" containerName="extract-content" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042249 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a33d74-c23f-405e-a3c5-6f5a4de71e7a" containerName="extract-content" Nov 25 06:51:06 crc kubenswrapper[4482]: E1125 06:51:06.042263 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a51867-1870-4ee4-bd5d-66ac6f1e3201" containerName="registry-server" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042269 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a51867-1870-4ee4-bd5d-66ac6f1e3201" containerName="registry-server" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042514 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a51867-1870-4ee4-bd5d-66ac6f1e3201" containerName="registry-server" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042531 4482 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7388949f-6c3e-4c11-96b6-b8a7c6ed5765" containerName="registry-server" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042540 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a33d74-c23f-405e-a3c5-6f5a4de71e7a" containerName="registry-server" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042552 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="8200abb3-4189-4dae-b0d3-9f09c330e278" containerName="marketplace-operator" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.042565 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f447b1e-5bd0-49f1-9bbd-5277552dbba3" containerName="registry-server" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.044645 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.048450 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.054713 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7mz8b"] Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.140436 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xvds\" (UniqueName: \"kubernetes.io/projected/248eb2bd-f8ed-4376-9ccf-ad47384972eb-kube-api-access-2xvds\") pod \"redhat-operators-7mz8b\" (UID: \"248eb2bd-f8ed-4376-9ccf-ad47384972eb\") " pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.140485 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/248eb2bd-f8ed-4376-9ccf-ad47384972eb-catalog-content\") pod \"redhat-operators-7mz8b\" (UID: \"248eb2bd-f8ed-4376-9ccf-ad47384972eb\") " pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.140605 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/248eb2bd-f8ed-4376-9ccf-ad47384972eb-utilities\") pod \"redhat-operators-7mz8b\" (UID: \"248eb2bd-f8ed-4376-9ccf-ad47384972eb\") " pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.241333 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/248eb2bd-f8ed-4376-9ccf-ad47384972eb-utilities\") pod \"redhat-operators-7mz8b\" (UID: \"248eb2bd-f8ed-4376-9ccf-ad47384972eb\") " pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.241387 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xvds\" (UniqueName: \"kubernetes.io/projected/248eb2bd-f8ed-4376-9ccf-ad47384972eb-kube-api-access-2xvds\") pod \"redhat-operators-7mz8b\" (UID: \"248eb2bd-f8ed-4376-9ccf-ad47384972eb\") " pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.241417 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/248eb2bd-f8ed-4376-9ccf-ad47384972eb-catalog-content\") pod \"redhat-operators-7mz8b\" (UID: 
\"248eb2bd-f8ed-4376-9ccf-ad47384972eb\") " pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.241753 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/248eb2bd-f8ed-4376-9ccf-ad47384972eb-utilities\") pod \"redhat-operators-7mz8b\" (UID: \"248eb2bd-f8ed-4376-9ccf-ad47384972eb\") " pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.241821 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/248eb2bd-f8ed-4376-9ccf-ad47384972eb-catalog-content\") pod \"redhat-operators-7mz8b\" (UID: \"248eb2bd-f8ed-4376-9ccf-ad47384972eb\") " pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.257282 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xvds\" (UniqueName: \"kubernetes.io/projected/248eb2bd-f8ed-4376-9ccf-ad47384972eb-kube-api-access-2xvds\") pod \"redhat-operators-7mz8b\" (UID: \"248eb2bd-f8ed-4376-9ccf-ad47384972eb\") " pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.371499 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:06 crc kubenswrapper[4482]: I1125 06:51:06.725125 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7mz8b"] Nov 25 06:51:06 crc kubenswrapper[4482]: W1125 06:51:06.732958 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod248eb2bd_f8ed_4376_9ccf_ad47384972eb.slice/crio-9e4f0094765db3134f76f1a27caf57729565126fc8361e0eb87a0072a422c327 WatchSource:0}: Error finding container 9e4f0094765db3134f76f1a27caf57729565126fc8361e0eb87a0072a422c327: Status 404 returned error can't find the container with id 9e4f0094765db3134f76f1a27caf57729565126fc8361e0eb87a0072a422c327 Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.030054 4482 generic.go:334] "Generic (PLEG): container finished" podID="248eb2bd-f8ed-4376-9ccf-ad47384972eb" containerID="9b6478d55bb5a1aa94d9d234098baca1ce3b2ccf3ced24076b45d731507eabb7" exitCode=0 Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.030156 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mz8b" event={"ID":"248eb2bd-f8ed-4376-9ccf-ad47384972eb","Type":"ContainerDied","Data":"9b6478d55bb5a1aa94d9d234098baca1ce3b2ccf3ced24076b45d731507eabb7"} Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.030410 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mz8b" event={"ID":"248eb2bd-f8ed-4376-9ccf-ad47384972eb","Type":"ContainerStarted","Data":"9e4f0094765db3134f76f1a27caf57729565126fc8361e0eb87a0072a422c327"} Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.441290 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8fl5h"] Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.442285 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.444694 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.452772 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngjvj\" (UniqueName: \"kubernetes.io/projected/a409f14f-4cf5-467e-afec-1fd121548e05-kube-api-access-ngjvj\") pod \"certified-operators-8fl5h\" (UID: \"a409f14f-4cf5-467e-afec-1fd121548e05\") " pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.452899 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a409f14f-4cf5-467e-afec-1fd121548e05-utilities\") pod \"certified-operators-8fl5h\" (UID: \"a409f14f-4cf5-467e-afec-1fd121548e05\") " pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.452954 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a409f14f-4cf5-467e-afec-1fd121548e05-catalog-content\") pod \"certified-operators-8fl5h\" (UID: \"a409f14f-4cf5-467e-afec-1fd121548e05\") " pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.456607 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8fl5h"] Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.553703 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngjvj\" (UniqueName: \"kubernetes.io/projected/a409f14f-4cf5-467e-afec-1fd121548e05-kube-api-access-ngjvj\") pod \"certified-operators-8fl5h\" (UID: \"a409f14f-4cf5-467e-afec-1fd121548e05\") " pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.553762 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a409f14f-4cf5-467e-afec-1fd121548e05-utilities\") pod \"certified-operators-8fl5h\" (UID: \"a409f14f-4cf5-467e-afec-1fd121548e05\") " pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.553791 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a409f14f-4cf5-467e-afec-1fd121548e05-catalog-content\") pod \"certified-operators-8fl5h\" (UID: \"a409f14f-4cf5-467e-afec-1fd121548e05\") " pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.554336 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a409f14f-4cf5-467e-afec-1fd121548e05-catalog-content\") pod \"certified-operators-8fl5h\" (UID: \"a409f14f-4cf5-467e-afec-1fd121548e05\") " pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.554658 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a409f14f-4cf5-467e-afec-1fd121548e05-utilities\") pod \"certified-operators-8fl5h\" (UID: 
\"a409f14f-4cf5-467e-afec-1fd121548e05\") " pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.571368 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngjvj\" (UniqueName: \"kubernetes.io/projected/a409f14f-4cf5-467e-afec-1fd121548e05-kube-api-access-ngjvj\") pod \"certified-operators-8fl5h\" (UID: \"a409f14f-4cf5-467e-afec-1fd121548e05\") " pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.755284 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:07 crc kubenswrapper[4482]: I1125 06:51:07.909663 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8fl5h"] Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.037630 4482 generic.go:334] "Generic (PLEG): container finished" podID="a409f14f-4cf5-467e-afec-1fd121548e05" containerID="2e9a8c82c10f90841418d044f0389365f145a6e8417ab992c268e566e5147e56" exitCode=0 Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.037670 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fl5h" event={"ID":"a409f14f-4cf5-467e-afec-1fd121548e05","Type":"ContainerDied","Data":"2e9a8c82c10f90841418d044f0389365f145a6e8417ab992c268e566e5147e56"} Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.037872 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fl5h" event={"ID":"a409f14f-4cf5-467e-afec-1fd121548e05","Type":"ContainerStarted","Data":"7238740badbe682b154295542787e238fdf9823452227557e0eb5262881ef791"} Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.439983 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-69xjj"] Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.440925 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.446005 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.450002 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69xjj"] Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.565154 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9940aeba-b78c-4271-9748-02d3200887f8-utilities\") pod \"community-operators-69xjj\" (UID: \"9940aeba-b78c-4271-9748-02d3200887f8\") " pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.565325 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9940aeba-b78c-4271-9748-02d3200887f8-catalog-content\") pod \"community-operators-69xjj\" (UID: \"9940aeba-b78c-4271-9748-02d3200887f8\") " pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.565361 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8t7g\" (UniqueName: \"kubernetes.io/projected/9940aeba-b78c-4271-9748-02d3200887f8-kube-api-access-h8t7g\") pod \"community-operators-69xjj\" (UID: \"9940aeba-b78c-4271-9748-02d3200887f8\") " pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.666728 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9940aeba-b78c-4271-9748-02d3200887f8-catalog-content\") pod \"community-operators-69xjj\" (UID: \"9940aeba-b78c-4271-9748-02d3200887f8\") " pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.667000 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8t7g\" (UniqueName: \"kubernetes.io/projected/9940aeba-b78c-4271-9748-02d3200887f8-kube-api-access-h8t7g\") pod \"community-operators-69xjj\" (UID: \"9940aeba-b78c-4271-9748-02d3200887f8\") " pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.667068 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9940aeba-b78c-4271-9748-02d3200887f8-utilities\") pod \"community-operators-69xjj\" (UID: \"9940aeba-b78c-4271-9748-02d3200887f8\") " pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.667470 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9940aeba-b78c-4271-9748-02d3200887f8-utilities\") pod \"community-operators-69xjj\" (UID: \"9940aeba-b78c-4271-9748-02d3200887f8\") " pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.667691 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9940aeba-b78c-4271-9748-02d3200887f8-catalog-content\") pod \"community-operators-69xjj\" (UID: 
\"9940aeba-b78c-4271-9748-02d3200887f8\") " pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.690290 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8t7g\" (UniqueName: \"kubernetes.io/projected/9940aeba-b78c-4271-9748-02d3200887f8-kube-api-access-h8t7g\") pod \"community-operators-69xjj\" (UID: \"9940aeba-b78c-4271-9748-02d3200887f8\") " pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:08 crc kubenswrapper[4482]: I1125 06:51:08.753679 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.048895 4482 generic.go:334] "Generic (PLEG): container finished" podID="a409f14f-4cf5-467e-afec-1fd121548e05" containerID="0a288784bff3a6795669c69c85854b1f0d1d0ae43e0fc440678468442d1f8e99" exitCode=0 Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.049070 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fl5h" event={"ID":"a409f14f-4cf5-467e-afec-1fd121548e05","Type":"ContainerDied","Data":"0a288784bff3a6795669c69c85854b1f0d1d0ae43e0fc440678468442d1f8e99"} Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.051877 4482 generic.go:334] "Generic (PLEG): container finished" podID="248eb2bd-f8ed-4376-9ccf-ad47384972eb" containerID="321fc5bdadd51eab370351974ee92884eefce888c74b2b070ca1576e83e1c39c" exitCode=0 Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.051928 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mz8b" event={"ID":"248eb2bd-f8ed-4376-9ccf-ad47384972eb","Type":"ContainerDied","Data":"321fc5bdadd51eab370351974ee92884eefce888c74b2b070ca1576e83e1c39c"} Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.123772 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69xjj"] Nov 25 06:51:09 crc kubenswrapper[4482]: W1125 06:51:09.129977 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9940aeba_b78c_4271_9748_02d3200887f8.slice/crio-55741f332f6bc55d9504fe8811e198019edf3fe82da886aed437e861d00ec396 WatchSource:0}: Error finding container 55741f332f6bc55d9504fe8811e198019edf3fe82da886aed437e861d00ec396: Status 404 returned error can't find the container with id 55741f332f6bc55d9504fe8811e198019edf3fe82da886aed437e861d00ec396 Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.843482 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-svfb9"] Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.844995 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.849755 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.890582 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a83cb77-fe06-42b3-9d0d-998b65e34604-utilities\") pod \"redhat-marketplace-svfb9\" (UID: \"2a83cb77-fe06-42b3-9d0d-998b65e34604\") " pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.890800 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r965c\" (UniqueName: \"kubernetes.io/projected/2a83cb77-fe06-42b3-9d0d-998b65e34604-kube-api-access-r965c\") pod \"redhat-marketplace-svfb9\" (UID: \"2a83cb77-fe06-42b3-9d0d-998b65e34604\") " pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.890922 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a83cb77-fe06-42b3-9d0d-998b65e34604-catalog-content\") pod \"redhat-marketplace-svfb9\" (UID: \"2a83cb77-fe06-42b3-9d0d-998b65e34604\") " pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.897741 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svfb9"] Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.991930 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a83cb77-fe06-42b3-9d0d-998b65e34604-utilities\") pod \"redhat-marketplace-svfb9\" (UID: \"2a83cb77-fe06-42b3-9d0d-998b65e34604\") " pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.992009 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r965c\" (UniqueName: \"kubernetes.io/projected/2a83cb77-fe06-42b3-9d0d-998b65e34604-kube-api-access-r965c\") pod \"redhat-marketplace-svfb9\" (UID: \"2a83cb77-fe06-42b3-9d0d-998b65e34604\") " pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.992044 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a83cb77-fe06-42b3-9d0d-998b65e34604-catalog-content\") pod \"redhat-marketplace-svfb9\" (UID: \"2a83cb77-fe06-42b3-9d0d-998b65e34604\") " pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.992971 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a83cb77-fe06-42b3-9d0d-998b65e34604-utilities\") pod \"redhat-marketplace-svfb9\" (UID: \"2a83cb77-fe06-42b3-9d0d-998b65e34604\") " pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:09 crc kubenswrapper[4482]: I1125 06:51:09.992977 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a83cb77-fe06-42b3-9d0d-998b65e34604-catalog-content\") pod \"redhat-marketplace-svfb9\" (UID: 
\"2a83cb77-fe06-42b3-9d0d-998b65e34604\") " pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:10 crc kubenswrapper[4482]: I1125 06:51:10.013983 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r965c\" (UniqueName: \"kubernetes.io/projected/2a83cb77-fe06-42b3-9d0d-998b65e34604-kube-api-access-r965c\") pod \"redhat-marketplace-svfb9\" (UID: \"2a83cb77-fe06-42b3-9d0d-998b65e34604\") " pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:10 crc kubenswrapper[4482]: I1125 06:51:10.057917 4482 generic.go:334] "Generic (PLEG): container finished" podID="9940aeba-b78c-4271-9748-02d3200887f8" containerID="bcb197619c9355d1338edb74f18faa7513b378fcda97decc00bd993b86d48c88" exitCode=0 Nov 25 06:51:10 crc kubenswrapper[4482]: I1125 06:51:10.058039 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69xjj" event={"ID":"9940aeba-b78c-4271-9748-02d3200887f8","Type":"ContainerDied","Data":"bcb197619c9355d1338edb74f18faa7513b378fcda97decc00bd993b86d48c88"} Nov 25 06:51:10 crc kubenswrapper[4482]: I1125 06:51:10.058119 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69xjj" event={"ID":"9940aeba-b78c-4271-9748-02d3200887f8","Type":"ContainerStarted","Data":"55741f332f6bc55d9504fe8811e198019edf3fe82da886aed437e861d00ec396"} Nov 25 06:51:10 crc kubenswrapper[4482]: I1125 06:51:10.062089 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fl5h" event={"ID":"a409f14f-4cf5-467e-afec-1fd121548e05","Type":"ContainerStarted","Data":"b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27"} Nov 25 06:51:10 crc kubenswrapper[4482]: I1125 06:51:10.069095 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7mz8b" event={"ID":"248eb2bd-f8ed-4376-9ccf-ad47384972eb","Type":"ContainerStarted","Data":"7dae517a62fbd9929e4ea2cd98bc0b98b07be3136d26c31c5cc65891b05cd8db"} Nov 25 06:51:10 crc kubenswrapper[4482]: I1125 06:51:10.094066 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8fl5h" podStartSLOduration=1.576396576 podStartE2EDuration="3.094041466s" podCreationTimestamp="2025-11-25 06:51:07 +0000 UTC" firstStartedPulling="2025-11-25 06:51:08.038809782 +0000 UTC m=+242.527041042" lastFinishedPulling="2025-11-25 06:51:09.556454673 +0000 UTC m=+244.044685932" observedRunningTime="2025-11-25 06:51:10.093372043 +0000 UTC m=+244.581603302" watchObservedRunningTime="2025-11-25 06:51:10.094041466 +0000 UTC m=+244.582272726" Nov 25 06:51:10 crc kubenswrapper[4482]: I1125 06:51:10.108449 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7mz8b" podStartSLOduration=1.6459066409999998 podStartE2EDuration="4.108430468s" podCreationTimestamp="2025-11-25 06:51:06 +0000 UTC" firstStartedPulling="2025-11-25 06:51:07.032076079 +0000 UTC m=+241.520307338" lastFinishedPulling="2025-11-25 06:51:09.494599906 +0000 UTC m=+243.982831165" observedRunningTime="2025-11-25 06:51:10.105765248 +0000 UTC m=+244.593996507" watchObservedRunningTime="2025-11-25 06:51:10.108430468 +0000 UTC m=+244.596661727" Nov 25 06:51:10 crc kubenswrapper[4482]: I1125 06:51:10.161559 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:10 crc kubenswrapper[4482]: I1125 06:51:10.557619 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svfb9"] Nov 25 06:51:11 crc kubenswrapper[4482]: I1125 06:51:11.076739 4482 generic.go:334] "Generic (PLEG): container finished" podID="2a83cb77-fe06-42b3-9d0d-998b65e34604" containerID="28889e2192223829d7f2758023bab52192eff2855c50dcdc22ce2c989d896625" exitCode=0 Nov 25 06:51:11 crc kubenswrapper[4482]: I1125 06:51:11.076847 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svfb9" event={"ID":"2a83cb77-fe06-42b3-9d0d-998b65e34604","Type":"ContainerDied","Data":"28889e2192223829d7f2758023bab52192eff2855c50dcdc22ce2c989d896625"} Nov 25 06:51:11 crc kubenswrapper[4482]: I1125 06:51:11.077221 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svfb9" event={"ID":"2a83cb77-fe06-42b3-9d0d-998b65e34604","Type":"ContainerStarted","Data":"c16fd67622dc81e8036ca1af92ca2ec4404a363df0c8da1faeadc5b480b0b3c0"} Nov 25 06:51:11 crc kubenswrapper[4482]: I1125 06:51:11.080579 4482 generic.go:334] "Generic (PLEG): container finished" podID="9940aeba-b78c-4271-9748-02d3200887f8" containerID="111dae44ca1f235f4c7530176408c328e38b94c6e9d2539d5112c1fa358e2d7a" exitCode=0 Nov 25 06:51:11 crc kubenswrapper[4482]: I1125 06:51:11.080694 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69xjj" event={"ID":"9940aeba-b78c-4271-9748-02d3200887f8","Type":"ContainerDied","Data":"111dae44ca1f235f4c7530176408c328e38b94c6e9d2539d5112c1fa358e2d7a"} Nov 25 06:51:12 crc kubenswrapper[4482]: I1125 06:51:12.087079 4482 generic.go:334] "Generic (PLEG): container finished" podID="2a83cb77-fe06-42b3-9d0d-998b65e34604" containerID="955ff5e95ff10dd80f5a7573eea68884a97da929e5e2ba81a8cfb3f298eeb6eb" exitCode=0 Nov 25 06:51:12 crc kubenswrapper[4482]: I1125 06:51:12.087206 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svfb9" event={"ID":"2a83cb77-fe06-42b3-9d0d-998b65e34604","Type":"ContainerDied","Data":"955ff5e95ff10dd80f5a7573eea68884a97da929e5e2ba81a8cfb3f298eeb6eb"} Nov 25 06:51:12 crc kubenswrapper[4482]: I1125 06:51:12.089158 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69xjj" event={"ID":"9940aeba-b78c-4271-9748-02d3200887f8","Type":"ContainerStarted","Data":"a99c225b604127ea1387c6334ca3c19e91a0bb06220f94dd01b9487f8235c0b7"} Nov 25 06:51:13 crc kubenswrapper[4482]: I1125 06:51:13.095527 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svfb9" event={"ID":"2a83cb77-fe06-42b3-9d0d-998b65e34604","Type":"ContainerStarted","Data":"b367ba77e554c7b8a429fb1dfb0fd007c9312fb273e7e99405a1edd7a2cdcedf"} Nov 25 06:51:13 crc kubenswrapper[4482]: I1125 06:51:13.113316 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-69xjj" podStartSLOduration=3.586242897 podStartE2EDuration="5.113299197s" podCreationTimestamp="2025-11-25 06:51:08 +0000 UTC" firstStartedPulling="2025-11-25 06:51:10.059604275 +0000 UTC m=+244.547835534" lastFinishedPulling="2025-11-25 06:51:11.586660575 +0000 UTC m=+246.074891834" observedRunningTime="2025-11-25 06:51:12.127663594 +0000 UTC m=+246.615894854" watchObservedRunningTime="2025-11-25 06:51:13.113299197 
+0000 UTC m=+247.601530456" Nov 25 06:51:16 crc kubenswrapper[4482]: I1125 06:51:16.372850 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:16 crc kubenswrapper[4482]: I1125 06:51:16.373152 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:16 crc kubenswrapper[4482]: I1125 06:51:16.409470 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:16 crc kubenswrapper[4482]: I1125 06:51:16.426503 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-svfb9" podStartSLOduration=5.922830995 podStartE2EDuration="7.426485938s" podCreationTimestamp="2025-11-25 06:51:09 +0000 UTC" firstStartedPulling="2025-11-25 06:51:11.079084952 +0000 UTC m=+245.567316210" lastFinishedPulling="2025-11-25 06:51:12.582739894 +0000 UTC m=+247.070971153" observedRunningTime="2025-11-25 06:51:13.116105341 +0000 UTC m=+247.604336601" watchObservedRunningTime="2025-11-25 06:51:16.426485938 +0000 UTC m=+250.914717197" Nov 25 06:51:17 crc kubenswrapper[4482]: I1125 06:51:17.149624 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7mz8b" Nov 25 06:51:17 crc kubenswrapper[4482]: I1125 06:51:17.755777 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:17 crc kubenswrapper[4482]: I1125 06:51:17.755865 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:17 crc kubenswrapper[4482]: I1125 06:51:17.786985 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:18 crc kubenswrapper[4482]: I1125 06:51:18.165245 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 06:51:18 crc kubenswrapper[4482]: I1125 06:51:18.754768 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:18 crc kubenswrapper[4482]: I1125 06:51:18.754972 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:18 crc kubenswrapper[4482]: I1125 06:51:18.786461 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:19 crc kubenswrapper[4482]: I1125 06:51:19.155856 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-69xjj" Nov 25 06:51:20 crc kubenswrapper[4482]: I1125 06:51:20.162587 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:20 crc kubenswrapper[4482]: I1125 06:51:20.162648 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:20 crc kubenswrapper[4482]: I1125 06:51:20.208283 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:51:21 crc kubenswrapper[4482]: I1125 
06:51:21.164021 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-svfb9" Nov 25 06:52:05 crc kubenswrapper[4482]: I1125 06:52:05.718278 4482 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 25 06:52:39 crc kubenswrapper[4482]: I1125 06:52:39.118404 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 06:52:39 crc kubenswrapper[4482]: I1125 06:52:39.118811 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 06:53:09 crc kubenswrapper[4482]: I1125 06:53:09.117266 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 06:53:09 crc kubenswrapper[4482]: I1125 06:53:09.117631 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.508607 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2vq9t"] Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.509567 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.547691 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2vq9t"] Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.628461 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.628513 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1788e4b-a585-4f37-b8c1-6693fb2d2073-trusted-ca\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.628544 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f1788e4b-a585-4f37-b8c1-6693fb2d2073-registry-tls\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.628710 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f1788e4b-a585-4f37-b8c1-6693fb2d2073-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.628869 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx4ps\" (UniqueName: \"kubernetes.io/projected/f1788e4b-a585-4f37-b8c1-6693fb2d2073-kube-api-access-vx4ps\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.628945 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1788e4b-a585-4f37-b8c1-6693fb2d2073-bound-sa-token\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.628972 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f1788e4b-a585-4f37-b8c1-6693fb2d2073-registry-certificates\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.629263 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/f1788e4b-a585-4f37-b8c1-6693fb2d2073-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.644729 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.730205 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx4ps\" (UniqueName: \"kubernetes.io/projected/f1788e4b-a585-4f37-b8c1-6693fb2d2073-kube-api-access-vx4ps\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.730247 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1788e4b-a585-4f37-b8c1-6693fb2d2073-bound-sa-token\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.730270 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f1788e4b-a585-4f37-b8c1-6693fb2d2073-registry-certificates\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.730299 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f1788e4b-a585-4f37-b8c1-6693fb2d2073-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.730329 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1788e4b-a585-4f37-b8c1-6693fb2d2073-trusted-ca\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.730347 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f1788e4b-a585-4f37-b8c1-6693fb2d2073-registry-tls\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.730386 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f1788e4b-a585-4f37-b8c1-6693fb2d2073-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.731380 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f1788e4b-a585-4f37-b8c1-6693fb2d2073-registry-certificates\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.731770 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1788e4b-a585-4f37-b8c1-6693fb2d2073-trusted-ca\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.731772 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f1788e4b-a585-4f37-b8c1-6693fb2d2073-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.735353 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f1788e4b-a585-4f37-b8c1-6693fb2d2073-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.735372 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f1788e4b-a585-4f37-b8c1-6693fb2d2073-registry-tls\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.742959 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1788e4b-a585-4f37-b8c1-6693fb2d2073-bound-sa-token\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.743116 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx4ps\" (UniqueName: \"kubernetes.io/projected/f1788e4b-a585-4f37-b8c1-6693fb2d2073-kube-api-access-vx4ps\") pod \"image-registry-66df7c8f76-2vq9t\" (UID: \"f1788e4b-a585-4f37-b8c1-6693fb2d2073\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:29 crc kubenswrapper[4482]: I1125 06:53:29.822356 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:30 crc kubenswrapper[4482]: I1125 06:53:30.159254 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2vq9t"] Nov 25 06:53:30 crc kubenswrapper[4482]: I1125 06:53:30.635919 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" event={"ID":"f1788e4b-a585-4f37-b8c1-6693fb2d2073","Type":"ContainerStarted","Data":"add1b7d2c97389735258c1a31fe61863e97147076697748cc5f6665393ba16f5"} Nov 25 06:53:30 crc kubenswrapper[4482]: I1125 06:53:30.636114 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" event={"ID":"f1788e4b-a585-4f37-b8c1-6693fb2d2073","Type":"ContainerStarted","Data":"f2e3341080586eebf21ca3ebca94d05cc64d822dc94116828dbb5d8ccac8b428"} Nov 25 06:53:30 crc kubenswrapper[4482]: I1125 06:53:30.636138 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:30 crc kubenswrapper[4482]: I1125 06:53:30.649056 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" podStartSLOduration=1.6490437039999999 podStartE2EDuration="1.649043704s" podCreationTimestamp="2025-11-25 06:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:53:30.64804457 +0000 UTC m=+385.136275829" watchObservedRunningTime="2025-11-25 06:53:30.649043704 +0000 UTC m=+385.137274953" Nov 25 06:53:39 crc kubenswrapper[4482]: I1125 06:53:39.117278 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 06:53:39 crc kubenswrapper[4482]: I1125 06:53:39.118268 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 06:53:39 crc kubenswrapper[4482]: I1125 06:53:39.118340 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:53:39 crc kubenswrapper[4482]: I1125 06:53:39.118919 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9556eecd99aaa627f2f8338b1f2e2766518897cc04a75034690120a70e07dff"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 06:53:39 crc kubenswrapper[4482]: I1125 06:53:39.118980 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://b9556eecd99aaa627f2f8338b1f2e2766518897cc04a75034690120a70e07dff" gracePeriod=600 Nov 25 06:53:39 crc kubenswrapper[4482]: E1125 
06:53:39.147154 4482 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a7d6ef_c931_4f15_893b_c9436d6de1f5.slice/crio-b9556eecd99aaa627f2f8338b1f2e2766518897cc04a75034690120a70e07dff.scope\": RecentStats: unable to find data in memory cache]" Nov 25 06:53:39 crc kubenswrapper[4482]: I1125 06:53:39.675032 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="b9556eecd99aaa627f2f8338b1f2e2766518897cc04a75034690120a70e07dff" exitCode=0 Nov 25 06:53:39 crc kubenswrapper[4482]: I1125 06:53:39.675101 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"b9556eecd99aaa627f2f8338b1f2e2766518897cc04a75034690120a70e07dff"} Nov 25 06:53:39 crc kubenswrapper[4482]: I1125 06:53:39.675368 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"d84812a555ffdedafcf55f0c474a9703c65d1fb93d154179be65ddf6b69c96ac"} Nov 25 06:53:39 crc kubenswrapper[4482]: I1125 06:53:39.675392 4482 scope.go:117] "RemoveContainer" containerID="33bf701b8d926c62e9109ffa0505537dd7bf5509804c12773892f616a8365742" Nov 25 06:53:49 crc kubenswrapper[4482]: I1125 06:53:49.826321 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2vq9t" Nov 25 06:53:49 crc kubenswrapper[4482]: I1125 06:53:49.865199 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fbpdk"] Nov 25 06:54:15 crc kubenswrapper[4482]: I1125 06:54:15.743750 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" podUID="7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" containerName="registry" containerID="cri-o://ae7f7d644aef9a5be5764017667f84da40ef432f2107323933977bdeb1b43d91" gracePeriod=30 Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.019103 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.072749 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-installation-pull-secrets\") pod \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.072959 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-bound-sa-token\") pod \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.073260 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rll2z\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-kube-api-access-rll2z\") pod \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.073452 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-ca-trust-extracted\") pod \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.073502 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-registry-tls\") pod \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.073588 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-trusted-ca\") pod \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.073626 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-registry-certificates\") pod \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.073784 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\" (UID: \"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146\") " Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.075211 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.077580 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.081610 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.081660 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-kube-api-access-rll2z" (OuterVolumeSpecName: "kube-api-access-rll2z") pod "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146"). InnerVolumeSpecName "kube-api-access-rll2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.081780 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.081811 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.081978 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.087969 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" (UID: "7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.175146 4482 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.175203 4482 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.175213 4482 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.175222 4482 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.175236 4482 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.175244 4482 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.175253 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rll2z\" (UniqueName: \"kubernetes.io/projected/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146-kube-api-access-rll2z\") on node \"crc\" DevicePath \"\"" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.832359 4482 generic.go:334] "Generic (PLEG): container finished" podID="7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" containerID="ae7f7d644aef9a5be5764017667f84da40ef432f2107323933977bdeb1b43d91" exitCode=0 Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.832398 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" event={"ID":"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146","Type":"ContainerDied","Data":"ae7f7d644aef9a5be5764017667f84da40ef432f2107323933977bdeb1b43d91"} Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.832429 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" event={"ID":"7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146","Type":"ContainerDied","Data":"878db53dba431b2009cb369155f64a1b63806227297ee2f4e5d41e640fc64cc2"} Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.832447 4482 scope.go:117] "RemoveContainer" containerID="ae7f7d644aef9a5be5764017667f84da40ef432f2107323933977bdeb1b43d91" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.832536 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fbpdk" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.844865 4482 scope.go:117] "RemoveContainer" containerID="ae7f7d644aef9a5be5764017667f84da40ef432f2107323933977bdeb1b43d91" Nov 25 06:54:16 crc kubenswrapper[4482]: E1125 06:54:16.845186 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae7f7d644aef9a5be5764017667f84da40ef432f2107323933977bdeb1b43d91\": container with ID starting with ae7f7d644aef9a5be5764017667f84da40ef432f2107323933977bdeb1b43d91 not found: ID does not exist" containerID="ae7f7d644aef9a5be5764017667f84da40ef432f2107323933977bdeb1b43d91" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.845224 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae7f7d644aef9a5be5764017667f84da40ef432f2107323933977bdeb1b43d91"} err="failed to get container status \"ae7f7d644aef9a5be5764017667f84da40ef432f2107323933977bdeb1b43d91\": rpc error: code = NotFound desc = could not find container \"ae7f7d644aef9a5be5764017667f84da40ef432f2107323933977bdeb1b43d91\": container with ID starting with ae7f7d644aef9a5be5764017667f84da40ef432f2107323933977bdeb1b43d91 not found: ID does not exist" Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.853870 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fbpdk"] Nov 25 06:54:16 crc kubenswrapper[4482]: I1125 06:54:16.859450 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fbpdk"] Nov 25 06:54:17 crc kubenswrapper[4482]: I1125 06:54:17.836204 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" path="/var/lib/kubelet/pods/7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146/volumes" Nov 25 06:55:39 crc kubenswrapper[4482]: I1125 06:55:39.118218 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 06:55:39 crc kubenswrapper[4482]: I1125 06:55:39.118983 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.672642 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-bcpsp"] Nov 25 06:55:54 crc kubenswrapper[4482]: E1125 06:55:54.673999 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" containerName="registry" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.674079 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" containerName="registry" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.674242 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a4dbd3d-c7bc-43a6-bb15-1d17a0b14146" containerName="registry" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.674641 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-bcpsp" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.680227 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-bcpsp"] Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.681769 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.682004 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.682198 4482 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2sx5j" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.690138 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rvzmp"] Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.690786 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-rvzmp" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.691945 4482 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-8v6ms" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.703758 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-pt8lh"] Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.704421 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-pt8lh" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.705813 4482 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-qrwf8" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.721327 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-pt8lh"] Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.738582 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rvzmp"] Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.750718 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmr58\" (UniqueName: \"kubernetes.io/projected/8b848a1b-214e-49da-ab4b-5eb3150fc85f-kube-api-access-bmr58\") pod \"cert-manager-cainjector-7f985d654d-bcpsp\" (UID: \"8b848a1b-214e-49da-ab4b-5eb3150fc85f\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-bcpsp" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.852344 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdwkk\" (UniqueName: \"kubernetes.io/projected/724fe0c2-5ef8-48a9-8c39-c73b17e6fef2-kube-api-access-wdwkk\") pod \"cert-manager-5b446d88c5-rvzmp\" (UID: \"724fe0c2-5ef8-48a9-8c39-c73b17e6fef2\") " pod="cert-manager/cert-manager-5b446d88c5-rvzmp" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.852387 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjwpn\" (UniqueName: \"kubernetes.io/projected/7b5ad016-8967-4c47-9db4-6adce279ff9d-kube-api-access-fjwpn\") pod \"cert-manager-webhook-5655c58dd6-pt8lh\" (UID: \"7b5ad016-8967-4c47-9db4-6adce279ff9d\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-pt8lh" Nov 25 06:55:54 
crc kubenswrapper[4482]: I1125 06:55:54.852481 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmr58\" (UniqueName: \"kubernetes.io/projected/8b848a1b-214e-49da-ab4b-5eb3150fc85f-kube-api-access-bmr58\") pod \"cert-manager-cainjector-7f985d654d-bcpsp\" (UID: \"8b848a1b-214e-49da-ab4b-5eb3150fc85f\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-bcpsp" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.867927 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmr58\" (UniqueName: \"kubernetes.io/projected/8b848a1b-214e-49da-ab4b-5eb3150fc85f-kube-api-access-bmr58\") pod \"cert-manager-cainjector-7f985d654d-bcpsp\" (UID: \"8b848a1b-214e-49da-ab4b-5eb3150fc85f\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-bcpsp" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.953729 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdwkk\" (UniqueName: \"kubernetes.io/projected/724fe0c2-5ef8-48a9-8c39-c73b17e6fef2-kube-api-access-wdwkk\") pod \"cert-manager-5b446d88c5-rvzmp\" (UID: \"724fe0c2-5ef8-48a9-8c39-c73b17e6fef2\") " pod="cert-manager/cert-manager-5b446d88c5-rvzmp" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.953780 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjwpn\" (UniqueName: \"kubernetes.io/projected/7b5ad016-8967-4c47-9db4-6adce279ff9d-kube-api-access-fjwpn\") pod \"cert-manager-webhook-5655c58dd6-pt8lh\" (UID: \"7b5ad016-8967-4c47-9db4-6adce279ff9d\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-pt8lh" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.967981 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjwpn\" (UniqueName: \"kubernetes.io/projected/7b5ad016-8967-4c47-9db4-6adce279ff9d-kube-api-access-fjwpn\") pod \"cert-manager-webhook-5655c58dd6-pt8lh\" (UID: \"7b5ad016-8967-4c47-9db4-6adce279ff9d\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-pt8lh" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.968341 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdwkk\" (UniqueName: \"kubernetes.io/projected/724fe0c2-5ef8-48a9-8c39-c73b17e6fef2-kube-api-access-wdwkk\") pod \"cert-manager-5b446d88c5-rvzmp\" (UID: \"724fe0c2-5ef8-48a9-8c39-c73b17e6fef2\") " pod="cert-manager/cert-manager-5b446d88c5-rvzmp" Nov 25 06:55:54 crc kubenswrapper[4482]: I1125 06:55:54.986745 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-bcpsp" Nov 25 06:55:55 crc kubenswrapper[4482]: I1125 06:55:55.000249 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-rvzmp" Nov 25 06:55:55 crc kubenswrapper[4482]: I1125 06:55:55.017409 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-pt8lh" Nov 25 06:55:55 crc kubenswrapper[4482]: I1125 06:55:55.351842 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-bcpsp"] Nov 25 06:55:55 crc kubenswrapper[4482]: I1125 06:55:55.359867 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 06:55:55 crc kubenswrapper[4482]: I1125 06:55:55.385014 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rvzmp"] Nov 25 06:55:55 crc kubenswrapper[4482]: W1125 06:55:55.388435 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod724fe0c2_5ef8_48a9_8c39_c73b17e6fef2.slice/crio-6b5e0a850ad0e20dc911b3e45c758b9932eb5916a5ae2a73e4eb9827a4f94b50 WatchSource:0}: Error finding container 6b5e0a850ad0e20dc911b3e45c758b9932eb5916a5ae2a73e4eb9827a4f94b50: Status 404 returned error can't find the container with id 6b5e0a850ad0e20dc911b3e45c758b9932eb5916a5ae2a73e4eb9827a4f94b50 Nov 25 06:55:55 crc kubenswrapper[4482]: I1125 06:55:55.413731 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-pt8lh"] Nov 25 06:55:55 crc kubenswrapper[4482]: W1125 06:55:55.417031 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b5ad016_8967_4c47_9db4_6adce279ff9d.slice/crio-0f5d2b2cd26dc303acf243ccf0b5bda3deaa18fb2adf27f471fdea68be1159d3 WatchSource:0}: Error finding container 0f5d2b2cd26dc303acf243ccf0b5bda3deaa18fb2adf27f471fdea68be1159d3: Status 404 returned error can't find the container with id 0f5d2b2cd26dc303acf243ccf0b5bda3deaa18fb2adf27f471fdea68be1159d3 Nov 25 06:55:56 crc kubenswrapper[4482]: I1125 06:55:56.263864 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-bcpsp" event={"ID":"8b848a1b-214e-49da-ab4b-5eb3150fc85f","Type":"ContainerStarted","Data":"9c1f9d624a176af3f113020ca7a6ebea66e1c7643c2da2a20a98d91a0731ee67"} Nov 25 06:55:56 crc kubenswrapper[4482]: I1125 06:55:56.264974 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-rvzmp" event={"ID":"724fe0c2-5ef8-48a9-8c39-c73b17e6fef2","Type":"ContainerStarted","Data":"6b5e0a850ad0e20dc911b3e45c758b9932eb5916a5ae2a73e4eb9827a4f94b50"} Nov 25 06:55:56 crc kubenswrapper[4482]: I1125 06:55:56.265688 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-pt8lh" event={"ID":"7b5ad016-8967-4c47-9db4-6adce279ff9d","Type":"ContainerStarted","Data":"0f5d2b2cd26dc303acf243ccf0b5bda3deaa18fb2adf27f471fdea68be1159d3"} Nov 25 06:55:58 crc kubenswrapper[4482]: I1125 06:55:58.275359 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-bcpsp" event={"ID":"8b848a1b-214e-49da-ab4b-5eb3150fc85f","Type":"ContainerStarted","Data":"330890bab35de55892d48967cdb2b785d93c19b4774bf25ec16d751942b078e5"} Nov 25 06:55:58 crc kubenswrapper[4482]: I1125 06:55:58.278056 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-rvzmp" event={"ID":"724fe0c2-5ef8-48a9-8c39-c73b17e6fef2","Type":"ContainerStarted","Data":"50f859be50538b1dcef72ee3c5e778001e4ded9a483d340971535d73f32b7e0d"} Nov 25 06:55:58 crc kubenswrapper[4482]: I1125 06:55:58.279819 4482 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-pt8lh" event={"ID":"7b5ad016-8967-4c47-9db4-6adce279ff9d","Type":"ContainerStarted","Data":"e5eef1c9c88d198221d6ec133e51d81ef7074f5554a9d0249c1ac4e89ffdbf7f"} Nov 25 06:55:58 crc kubenswrapper[4482]: I1125 06:55:58.279984 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-pt8lh" Nov 25 06:55:58 crc kubenswrapper[4482]: I1125 06:55:58.301237 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-pt8lh" podStartSLOduration=1.7375730759999999 podStartE2EDuration="4.301224406s" podCreationTimestamp="2025-11-25 06:55:54 +0000 UTC" firstStartedPulling="2025-11-25 06:55:55.418619571 +0000 UTC m=+529.906850830" lastFinishedPulling="2025-11-25 06:55:57.9822709 +0000 UTC m=+532.470502160" observedRunningTime="2025-11-25 06:55:58.299285559 +0000 UTC m=+532.787516818" watchObservedRunningTime="2025-11-25 06:55:58.301224406 +0000 UTC m=+532.789455665" Nov 25 06:55:58 crc kubenswrapper[4482]: I1125 06:55:58.302099 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-bcpsp" podStartSLOduration=1.7427157439999998 podStartE2EDuration="4.302094957s" podCreationTimestamp="2025-11-25 06:55:54 +0000 UTC" firstStartedPulling="2025-11-25 06:55:55.359645377 +0000 UTC m=+529.847876637" lastFinishedPulling="2025-11-25 06:55:57.91902459 +0000 UTC m=+532.407255850" observedRunningTime="2025-11-25 06:55:58.286752595 +0000 UTC m=+532.774983854" watchObservedRunningTime="2025-11-25 06:55:58.302094957 +0000 UTC m=+532.790326216" Nov 25 06:55:58 crc kubenswrapper[4482]: I1125 06:55:58.310836 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-rvzmp" podStartSLOduration=1.7416668450000001 podStartE2EDuration="4.310827695s" podCreationTimestamp="2025-11-25 06:55:54 +0000 UTC" firstStartedPulling="2025-11-25 06:55:55.389843586 +0000 UTC m=+529.878074834" lastFinishedPulling="2025-11-25 06:55:57.959004425 +0000 UTC m=+532.447235684" observedRunningTime="2025-11-25 06:55:58.309898844 +0000 UTC m=+532.798130103" watchObservedRunningTime="2025-11-25 06:55:58.310827695 +0000 UTC m=+532.799058955" Nov 25 06:56:05 crc kubenswrapper[4482]: I1125 06:56:05.020242 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-pt8lh" Nov 25 06:56:05 crc kubenswrapper[4482]: I1125 06:56:05.980149 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c58dr"] Nov 25 06:56:05 crc kubenswrapper[4482]: I1125 06:56:05.980504 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovn-controller" containerID="cri-o://7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4" gracePeriod=30 Nov 25 06:56:05 crc kubenswrapper[4482]: I1125 06:56:05.980605 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovn-acl-logging" containerID="cri-o://5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388" gracePeriod=30 Nov 25 06:56:05 crc kubenswrapper[4482]: I1125 06:56:05.980585 4482 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="kube-rbac-proxy-node" containerID="cri-o://e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974" gracePeriod=30 Nov 25 06:56:05 crc kubenswrapper[4482]: I1125 06:56:05.980715 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120" gracePeriod=30 Nov 25 06:56:05 crc kubenswrapper[4482]: I1125 06:56:05.980844 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="nbdb" containerID="cri-o://2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b" gracePeriod=30 Nov 25 06:56:05 crc kubenswrapper[4482]: I1125 06:56:05.980803 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="sbdb" containerID="cri-o://206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640" gracePeriod=30 Nov 25 06:56:05 crc kubenswrapper[4482]: I1125 06:56:05.981200 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="northd" containerID="cri-o://9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418" gracePeriod=30 Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.007587 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" containerID="cri-o://95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d" gracePeriod=30 Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.279643 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/3.log" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.282783 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovn-acl-logging/0.log" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.283356 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovn-controller/0.log" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.283804 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.314052 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b5qtx_2384eec7-0cd1-4bc5-9bc7-b5bb42607c37/kube-multus/2.log" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.314741 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b5qtx_2384eec7-0cd1-4bc5-9bc7-b5bb42607c37/kube-multus/1.log" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.314784 4482 generic.go:334] "Generic (PLEG): container finished" podID="2384eec7-0cd1-4bc5-9bc7-b5bb42607c37" containerID="a912979c2425ba11c5085507bce694e01f44b8a323722e10580037b6644c5083" exitCode=2 Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.314836 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b5qtx" event={"ID":"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37","Type":"ContainerDied","Data":"a912979c2425ba11c5085507bce694e01f44b8a323722e10580037b6644c5083"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.314875 4482 scope.go:117] "RemoveContainer" containerID="898b0c91c20b936343585c30766cafaa8acc830554080c497fe1891d338e4b16" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.315702 4482 scope.go:117] "RemoveContainer" containerID="a912979c2425ba11c5085507bce694e01f44b8a323722e10580037b6644c5083" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.316158 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-b5qtx_openshift-multus(2384eec7-0cd1-4bc5-9bc7-b5bb42607c37)\"" pod="openshift-multus/multus-b5qtx" podUID="2384eec7-0cd1-4bc5-9bc7-b5bb42607c37" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.318052 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovnkube-controller/3.log" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.319646 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovn-acl-logging/0.log" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320047 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c58dr_2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/ovn-controller/0.log" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320418 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerID="95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d" exitCode=0 Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320438 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerID="206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640" exitCode=0 Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320448 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerID="2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b" exitCode=0 Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320455 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerID="9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418" exitCode=0 Nov 25 
06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320462 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerID="7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120" exitCode=0 Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320468 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerID="e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974" exitCode=0 Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320475 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerID="5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388" exitCode=143 Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320482 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerID="7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4" exitCode=143 Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320486 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320502 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320526 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320536 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320545 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320558 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320565 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320575 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320583 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320588 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320594 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320598 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320603 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320608 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320613 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320617 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320621 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320628 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320635 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320642 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320647 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320651 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320656 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320661 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320666 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320671 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320675 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320680 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320686 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320694 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320700 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320705 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320710 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320714 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320720 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320726 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320731 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320736 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320741 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320747 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58dr" event={"ID":"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e","Type":"ContainerDied","Data":"8317eb65a578765ad8e6efac8534606f8308dfb43abd7ed228d49453c4703aab"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320754 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320760 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320765 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320770 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320775 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320780 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320785 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320789 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320794 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.320798 4482 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546"} Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.329981 4482 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-z626q"] Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.331138 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="northd" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.331156 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="northd" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.331348 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="nbdb" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.331360 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="nbdb" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.331367 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.331477 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.331486 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.331492 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.331502 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.331507 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.331515 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="kubecfg-setup" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.331522 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="kubecfg-setup" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.331707 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.331716 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.331725 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="kube-rbac-proxy-node" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.331730 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="kube-rbac-proxy-node" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.331740 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovn-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.331746 4482 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovn-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.331753 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovn-acl-logging" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.331758 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovn-acl-logging" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.331943 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="sbdb" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.331953 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="sbdb" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.331977 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.331982 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.333829 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="northd" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.333846 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="nbdb" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.333854 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.333860 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovn-acl-logging" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.333865 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.333870 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.333992 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="kube-rbac-proxy-node" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.333999 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.334005 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.334012 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="sbdb" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.334019 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovn-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.334320 4482 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.334565 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.334764 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" containerName="ovnkube-controller" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.336618 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.344288 4482 scope.go:117] "RemoveContainer" containerID="95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.356027 4482 scope.go:117] "RemoveContainer" containerID="2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365263 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-var-lib-openvswitch\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365304 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-cni-bin\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365328 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-kubelet\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365358 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-env-overrides\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365364 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365387 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-run-netns\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365408 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-log-socket\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365423 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-cni-netd\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365380 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365442 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmswc\" (UniqueName: \"kubernetes.io/projected/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-kube-api-access-cmswc\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365459 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovn-node-metrics-cert\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365487 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovnkube-config\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365520 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-etc-openvswitch\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365542 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-ovn\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365553 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-systemd\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365581 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-openvswitch\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365597 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-run-ovn-kubernetes\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365613 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-systemd-units\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365631 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-node-log\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365647 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-slash\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365664 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365698 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovnkube-script-lib\") pod \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\" (UID: \"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e\") " Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365877 4482 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365887 4482 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365403 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: 
"2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365423 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365458 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-log-socket" (OuterVolumeSpecName: "log-socket") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365728 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.365988 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.366064 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.366083 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.366101 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.366118 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-slash" (OuterVolumeSpecName: "host-slash") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.366140 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-node-log" (OuterVolumeSpecName: "node-log") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.366158 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.366259 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.366295 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.366315 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.366318 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.370678 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.370842 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-kube-api-access-cmswc" (OuterVolumeSpecName: "kube-api-access-cmswc") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "kube-api-access-cmswc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.376497 4482 scope.go:117] "RemoveContainer" containerID="206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.377669 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" (UID: "2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.389302 4482 scope.go:117] "RemoveContainer" containerID="2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.400204 4482 scope.go:117] "RemoveContainer" containerID="9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.410280 4482 scope.go:117] "RemoveContainer" containerID="7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.420471 4482 scope.go:117] "RemoveContainer" containerID="e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.430443 4482 scope.go:117] "RemoveContainer" containerID="5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.439267 4482 scope.go:117] "RemoveContainer" containerID="7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.452065 4482 scope.go:117] "RemoveContainer" containerID="49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.463880 4482 scope.go:117] "RemoveContainer" containerID="95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.464158 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d\": container with ID starting with 95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d not found: ID does not exist" containerID="95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.464214 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d"} err="failed to get container status \"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d\": rpc error: code = NotFound desc = could not find container \"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d\": container with ID 
starting with 95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.464232 4482 scope.go:117] "RemoveContainer" containerID="2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.464527 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\": container with ID starting with 2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab not found: ID does not exist" containerID="2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.464550 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab"} err="failed to get container status \"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\": rpc error: code = NotFound desc = could not find container \"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\": container with ID starting with 2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.464564 4482 scope.go:117] "RemoveContainer" containerID="206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.464868 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\": container with ID starting with 206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640 not found: ID does not exist" containerID="206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.464886 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640"} err="failed to get container status \"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\": rpc error: code = NotFound desc = could not find container \"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\": container with ID starting with 206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.464899 4482 scope.go:117] "RemoveContainer" containerID="2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.465198 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\": container with ID starting with 2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b not found: ID does not exist" containerID="2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.465220 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b"} err="failed to get container status 
\"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\": rpc error: code = NotFound desc = could not find container \"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\": container with ID starting with 2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.465232 4482 scope.go:117] "RemoveContainer" containerID="9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.465530 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\": container with ID starting with 9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418 not found: ID does not exist" containerID="9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.465547 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418"} err="failed to get container status \"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\": rpc error: code = NotFound desc = could not find container \"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\": container with ID starting with 9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.465558 4482 scope.go:117] "RemoveContainer" containerID="7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.465814 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\": container with ID starting with 7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120 not found: ID does not exist" containerID="7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.465832 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120"} err="failed to get container status \"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\": rpc error: code = NotFound desc = could not find container \"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\": container with ID starting with 7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.465843 4482 scope.go:117] "RemoveContainer" containerID="e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.466116 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\": container with ID starting with e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974 not found: ID does not exist" containerID="e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466132 4482 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974"} err="failed to get container status \"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\": rpc error: code = NotFound desc = could not find container \"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\": container with ID starting with e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466144 4482 scope.go:117] "RemoveContainer" containerID="5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.466395 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\": container with ID starting with 5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388 not found: ID does not exist" containerID="5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466411 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388"} err="failed to get container status \"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\": rpc error: code = NotFound desc = could not find container \"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\": container with ID starting with 5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466423 4482 scope.go:117] "RemoveContainer" containerID="7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466551 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-etc-openvswitch\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466580 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-run-openvswitch\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466605 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-log-socket\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466620 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-ovn-node-metrics-cert\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc 
kubenswrapper[4482]: I1125 06:56:06.466647 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-cni-netd\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466660 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-var-lib-openvswitch\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.466691 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\": container with ID starting with 7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4 not found: ID does not exist" containerID="7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466705 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4"} err="failed to get container status \"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\": rpc error: code = NotFound desc = could not find container \"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\": container with ID starting with 7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466718 4482 scope.go:117] "RemoveContainer" containerID="49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466707 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-run-netns\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466798 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-ovnkube-config\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466816 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-env-overrides\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466831 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-slash\") pod \"ovnkube-node-z626q\" (UID: 
\"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466859 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-run-ovn-kubernetes\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466885 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-run-ovn\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466902 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-cni-bin\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466931 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466949 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-ovnkube-script-lib\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.466985 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-systemd-units\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467004 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-node-log\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467031 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-kubelet\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467052 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nlsh\" 
(UniqueName: \"kubernetes.io/projected/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-kube-api-access-7nlsh\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467067 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-run-systemd\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467289 4482 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467307 4482 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467317 4482 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467325 4482 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-log-socket\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467333 4482 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467341 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmswc\" (UniqueName: \"kubernetes.io/projected/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-kube-api-access-cmswc\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467349 4482 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467356 4482 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467367 4482 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467375 4482 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467382 4482 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-systemd\") on node \"crc\" 
DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467391 4482 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467400 4482 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467408 4482 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467415 4482 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-node-log\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467422 4482 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-slash\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467431 4482 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467440 4482 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:06 crc kubenswrapper[4482]: E1125 06:56:06.467524 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\": container with ID starting with 49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546 not found: ID does not exist" containerID="49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467540 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546"} err="failed to get container status \"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\": rpc error: code = NotFound desc = could not find container \"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\": container with ID starting with 49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467551 4482 scope.go:117] "RemoveContainer" containerID="95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467789 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d"} err="failed to get container status \"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d\": rpc error: code = NotFound 
desc = could not find container \"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d\": container with ID starting with 95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.467812 4482 scope.go:117] "RemoveContainer" containerID="2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.468055 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab"} err="failed to get container status \"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\": rpc error: code = NotFound desc = could not find container \"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\": container with ID starting with 2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.468074 4482 scope.go:117] "RemoveContainer" containerID="206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.468377 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640"} err="failed to get container status \"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\": rpc error: code = NotFound desc = could not find container \"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\": container with ID starting with 206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.468396 4482 scope.go:117] "RemoveContainer" containerID="2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.468626 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b"} err="failed to get container status \"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\": rpc error: code = NotFound desc = could not find container \"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\": container with ID starting with 2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.468646 4482 scope.go:117] "RemoveContainer" containerID="9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.468879 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418"} err="failed to get container status \"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\": rpc error: code = NotFound desc = could not find container \"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\": container with ID starting with 9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.468898 4482 scope.go:117] "RemoveContainer" containerID="7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 
06:56:06.469162 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120"} err="failed to get container status \"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\": rpc error: code = NotFound desc = could not find container \"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\": container with ID starting with 7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.469195 4482 scope.go:117] "RemoveContainer" containerID="e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.469510 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974"} err="failed to get container status \"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\": rpc error: code = NotFound desc = could not find container \"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\": container with ID starting with e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.469530 4482 scope.go:117] "RemoveContainer" containerID="5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.469769 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388"} err="failed to get container status \"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\": rpc error: code = NotFound desc = could not find container \"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\": container with ID starting with 5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.469786 4482 scope.go:117] "RemoveContainer" containerID="7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.470082 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4"} err="failed to get container status \"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\": rpc error: code = NotFound desc = could not find container \"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\": container with ID starting with 7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.470098 4482 scope.go:117] "RemoveContainer" containerID="49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.470408 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546"} err="failed to get container status \"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\": rpc error: code = NotFound desc = could not find container \"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\": container with ID starting with 
49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.470428 4482 scope.go:117] "RemoveContainer" containerID="95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.470647 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d"} err="failed to get container status \"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d\": rpc error: code = NotFound desc = could not find container \"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d\": container with ID starting with 95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.470664 4482 scope.go:117] "RemoveContainer" containerID="2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.470887 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab"} err="failed to get container status \"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\": rpc error: code = NotFound desc = could not find container \"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\": container with ID starting with 2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.470903 4482 scope.go:117] "RemoveContainer" containerID="206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.471209 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640"} err="failed to get container status \"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\": rpc error: code = NotFound desc = could not find container \"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\": container with ID starting with 206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.471228 4482 scope.go:117] "RemoveContainer" containerID="2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.471834 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b"} err="failed to get container status \"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\": rpc error: code = NotFound desc = could not find container \"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\": container with ID starting with 2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.471861 4482 scope.go:117] "RemoveContainer" containerID="9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.472074 4482 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418"} err="failed to get container status \"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\": rpc error: code = NotFound desc = could not find container \"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\": container with ID starting with 9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.472092 4482 scope.go:117] "RemoveContainer" containerID="7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.472267 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120"} err="failed to get container status \"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\": rpc error: code = NotFound desc = could not find container \"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\": container with ID starting with 7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.472283 4482 scope.go:117] "RemoveContainer" containerID="e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.472562 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974"} err="failed to get container status \"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\": rpc error: code = NotFound desc = could not find container \"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\": container with ID starting with e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.472581 4482 scope.go:117] "RemoveContainer" containerID="5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.472776 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388"} err="failed to get container status \"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\": rpc error: code = NotFound desc = could not find container \"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\": container with ID starting with 5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.472795 4482 scope.go:117] "RemoveContainer" containerID="7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.472994 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4"} err="failed to get container status \"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\": rpc error: code = NotFound desc = could not find container \"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\": container with ID starting with 7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4 not found: ID does not exist" Nov 
25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.473010 4482 scope.go:117] "RemoveContainer" containerID="49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.473236 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546"} err="failed to get container status \"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\": rpc error: code = NotFound desc = could not find container \"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\": container with ID starting with 49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.473255 4482 scope.go:117] "RemoveContainer" containerID="95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.473616 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d"} err="failed to get container status \"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d\": rpc error: code = NotFound desc = could not find container \"95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d\": container with ID starting with 95d262081f02f858bfd960d4d596b9c842b82aa50d9528f00ca04ab7174f4d5d not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.473644 4482 scope.go:117] "RemoveContainer" containerID="2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.473936 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab"} err="failed to get container status \"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\": rpc error: code = NotFound desc = could not find container \"2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab\": container with ID starting with 2ce05e1398cb71abe31e212993f8a2f2f3665285b27b727374cb327c930720ab not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.473963 4482 scope.go:117] "RemoveContainer" containerID="206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.474377 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640"} err="failed to get container status \"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\": rpc error: code = NotFound desc = could not find container \"206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640\": container with ID starting with 206323b326d7fb6621e87be0f39f16c887688791836563c42e251a1f22578640 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.474398 4482 scope.go:117] "RemoveContainer" containerID="2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.474656 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b"} err="failed to get container status 
\"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\": rpc error: code = NotFound desc = could not find container \"2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b\": container with ID starting with 2b27a32aedb1c8e02b7e22204921ff84b8fa44e78989e0ad87cd58fcd0e7ae4b not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.474674 4482 scope.go:117] "RemoveContainer" containerID="9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.474987 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418"} err="failed to get container status \"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\": rpc error: code = NotFound desc = could not find container \"9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418\": container with ID starting with 9401c8b0bd3469e7650098ae49776e27cceea67a6cfa7e991f3122cb0dcfa418 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.475005 4482 scope.go:117] "RemoveContainer" containerID="7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.475259 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120"} err="failed to get container status \"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\": rpc error: code = NotFound desc = could not find container \"7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120\": container with ID starting with 7fb384b5a9a127de7c58c28aa4fb3e3375477c527f380415255b88e68996c120 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.475279 4482 scope.go:117] "RemoveContainer" containerID="e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.475578 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974"} err="failed to get container status \"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\": rpc error: code = NotFound desc = could not find container \"e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974\": container with ID starting with e00167e8d06abe222c21d1d4b62b78de6992dfb8f794148a3a6c8d998df74974 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.475595 4482 scope.go:117] "RemoveContainer" containerID="5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.475818 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388"} err="failed to get container status \"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\": rpc error: code = NotFound desc = could not find container \"5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388\": container with ID starting with 5f991b775e5b62a1b7ea4a111ad6812c503976aa6888f63e905bf6f8b36ea388 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.475836 4482 scope.go:117] "RemoveContainer" 
containerID="7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.476142 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4"} err="failed to get container status \"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\": rpc error: code = NotFound desc = could not find container \"7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4\": container with ID starting with 7a08313edb32e889445af82fd7b2cc06870dbe2522adbbd3c8b3a2714b7e4de4 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.476161 4482 scope.go:117] "RemoveContainer" containerID="49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.476496 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546"} err="failed to get container status \"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\": rpc error: code = NotFound desc = could not find container \"49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546\": container with ID starting with 49e747d3c6882b5aebec0ba0f81012d89846498c381635f6d75ecfc4222f6546 not found: ID does not exist" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568071 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-run-netns\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568126 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-run-netns\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568152 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-ovnkube-config\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568223 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-env-overrides\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568243 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-slash\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568262 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-run-ovn-kubernetes\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568311 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-run-ovn\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568327 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-cni-bin\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568413 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-run-ovn\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568468 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568480 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-cni-bin\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568511 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-run-ovn-kubernetes\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568496 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-ovnkube-script-lib\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568550 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568569 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-systemd-units\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568610 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-node-log\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568638 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-kubelet\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568673 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-systemd-units\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568684 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nlsh\" (UniqueName: \"kubernetes.io/projected/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-kube-api-access-7nlsh\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568792 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-run-systemd\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568834 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-etc-openvswitch\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568853 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-run-openvswitch\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568724 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-kubelet\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568865 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-run-systemd\") pod \"ovnkube-node-z626q\" (UID: 
\"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568875 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-log-socket\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568895 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-log-socket\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568899 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-etc-openvswitch\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568911 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-cni-netd\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568924 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-run-openvswitch\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568932 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-ovn-node-metrics-cert\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568951 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-var-lib-openvswitch\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568980 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-slash\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.568934 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-host-cni-netd\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 
06:56:06.568712 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-node-log\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.569030 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-var-lib-openvswitch\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.569344 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-env-overrides\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.569568 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-ovnkube-script-lib\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.569679 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-ovnkube-config\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.572702 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-ovn-node-metrics-cert\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.581837 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nlsh\" (UniqueName: \"kubernetes.io/projected/8ca21f40-2c0a-4363-bf5a-dffe0cea773e-kube-api-access-7nlsh\") pod \"ovnkube-node-z626q\" (UID: \"8ca21f40-2c0a-4363-bf5a-dffe0cea773e\") " pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.643023 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c58dr"] Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.647103 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c58dr"] Nov 25 06:56:06 crc kubenswrapper[4482]: I1125 06:56:06.647469 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:06 crc kubenswrapper[4482]: W1125 06:56:06.665131 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ca21f40_2c0a_4363_bf5a_dffe0cea773e.slice/crio-5c62ac8ac2a1d2bc4f7b226ca539a1a69cf41a099173e0b19efef0d1ecf2a532 WatchSource:0}: Error finding container 5c62ac8ac2a1d2bc4f7b226ca539a1a69cf41a099173e0b19efef0d1ecf2a532: Status 404 returned error can't find the container with id 5c62ac8ac2a1d2bc4f7b226ca539a1a69cf41a099173e0b19efef0d1ecf2a532 Nov 25 06:56:07 crc kubenswrapper[4482]: I1125 06:56:07.327074 4482 generic.go:334] "Generic (PLEG): container finished" podID="8ca21f40-2c0a-4363-bf5a-dffe0cea773e" containerID="f1f3bf3e19b97bdc86dc0057ba005b3d56e3f56d74d5c0d4d29b36c41c311e3d" exitCode=0 Nov 25 06:56:07 crc kubenswrapper[4482]: I1125 06:56:07.327146 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" event={"ID":"8ca21f40-2c0a-4363-bf5a-dffe0cea773e","Type":"ContainerDied","Data":"f1f3bf3e19b97bdc86dc0057ba005b3d56e3f56d74d5c0d4d29b36c41c311e3d"} Nov 25 06:56:07 crc kubenswrapper[4482]: I1125 06:56:07.327215 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" event={"ID":"8ca21f40-2c0a-4363-bf5a-dffe0cea773e","Type":"ContainerStarted","Data":"5c62ac8ac2a1d2bc4f7b226ca539a1a69cf41a099173e0b19efef0d1ecf2a532"} Nov 25 06:56:07 crc kubenswrapper[4482]: I1125 06:56:07.330552 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b5qtx_2384eec7-0cd1-4bc5-9bc7-b5bb42607c37/kube-multus/2.log" Nov 25 06:56:07 crc kubenswrapper[4482]: I1125 06:56:07.836926 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e" path="/var/lib/kubelet/pods/2ee3c4ba-b1ee-4c31-8b39-8ed3d9e3945e/volumes" Nov 25 06:56:08 crc kubenswrapper[4482]: I1125 06:56:08.339523 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" event={"ID":"8ca21f40-2c0a-4363-bf5a-dffe0cea773e","Type":"ContainerStarted","Data":"26b00583fc9b3cc88f4744f24e3c6b1ea2111830d97b3fb4065fc0fcdeb91652"} Nov 25 06:56:08 crc kubenswrapper[4482]: I1125 06:56:08.339567 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" event={"ID":"8ca21f40-2c0a-4363-bf5a-dffe0cea773e","Type":"ContainerStarted","Data":"18f060e9ea9089110a1669ed64237f9f452803d7b8100f5d566d5214a5c79575"} Nov 25 06:56:08 crc kubenswrapper[4482]: I1125 06:56:08.339581 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" event={"ID":"8ca21f40-2c0a-4363-bf5a-dffe0cea773e","Type":"ContainerStarted","Data":"c8bed2a02310b02198c6c9dcf38ab20b12f4a60a8cdcec1160dfc4b40c63681e"} Nov 25 06:56:08 crc kubenswrapper[4482]: I1125 06:56:08.339589 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" event={"ID":"8ca21f40-2c0a-4363-bf5a-dffe0cea773e","Type":"ContainerStarted","Data":"72666de3a8d59d009bac2daaa7264878a7b2b6bc11944a69127b913fe894d1f9"} Nov 25 06:56:08 crc kubenswrapper[4482]: I1125 06:56:08.339595 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" 
event={"ID":"8ca21f40-2c0a-4363-bf5a-dffe0cea773e","Type":"ContainerStarted","Data":"19f3e205033c55c02b79a5ad1b4419fe0a93840f0e26e9348d7b22658c3a3f6c"} Nov 25 06:56:08 crc kubenswrapper[4482]: I1125 06:56:08.339603 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" event={"ID":"8ca21f40-2c0a-4363-bf5a-dffe0cea773e","Type":"ContainerStarted","Data":"902d11c7a656a0de950b41b726eb43d06b7f9f10a90db57aa8ea036396a4b668"} Nov 25 06:56:09 crc kubenswrapper[4482]: I1125 06:56:09.117521 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 06:56:09 crc kubenswrapper[4482]: I1125 06:56:09.117613 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 06:56:10 crc kubenswrapper[4482]: I1125 06:56:10.353565 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" event={"ID":"8ca21f40-2c0a-4363-bf5a-dffe0cea773e","Type":"ContainerStarted","Data":"bcd95d11800a667205afdf12825f2ad73bc0f9e7a227684a4fb57a6158fdfc43"} Nov 25 06:56:12 crc kubenswrapper[4482]: I1125 06:56:12.363786 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" event={"ID":"8ca21f40-2c0a-4363-bf5a-dffe0cea773e","Type":"ContainerStarted","Data":"090d76ad132c342cf496ebdbd312f22e2c83da5a618368254bc62a3580eb4c92"} Nov 25 06:56:12 crc kubenswrapper[4482]: I1125 06:56:12.364128 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:12 crc kubenswrapper[4482]: I1125 06:56:12.364140 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:12 crc kubenswrapper[4482]: I1125 06:56:12.364149 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:12 crc kubenswrapper[4482]: I1125 06:56:12.388477 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" podStartSLOduration=6.388464782 podStartE2EDuration="6.388464782s" podCreationTimestamp="2025-11-25 06:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:56:12.385963706 +0000 UTC m=+546.874194965" watchObservedRunningTime="2025-11-25 06:56:12.388464782 +0000 UTC m=+546.876696042" Nov 25 06:56:12 crc kubenswrapper[4482]: I1125 06:56:12.391102 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:12 crc kubenswrapper[4482]: I1125 06:56:12.393926 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:17 crc kubenswrapper[4482]: I1125 06:56:17.831294 4482 scope.go:117] "RemoveContainer" containerID="a912979c2425ba11c5085507bce694e01f44b8a323722e10580037b6644c5083" Nov 25 06:56:17 crc 
kubenswrapper[4482]: E1125 06:56:17.831944 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-b5qtx_openshift-multus(2384eec7-0cd1-4bc5-9bc7-b5bb42607c37)\"" pod="openshift-multus/multus-b5qtx" podUID="2384eec7-0cd1-4bc5-9bc7-b5bb42607c37" Nov 25 06:56:28 crc kubenswrapper[4482]: I1125 06:56:28.831112 4482 scope.go:117] "RemoveContainer" containerID="a912979c2425ba11c5085507bce694e01f44b8a323722e10580037b6644c5083" Nov 25 06:56:29 crc kubenswrapper[4482]: I1125 06:56:29.430480 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b5qtx_2384eec7-0cd1-4bc5-9bc7-b5bb42607c37/kube-multus/2.log" Nov 25 06:56:29 crc kubenswrapper[4482]: I1125 06:56:29.430702 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b5qtx" event={"ID":"2384eec7-0cd1-4bc5-9bc7-b5bb42607c37","Type":"ContainerStarted","Data":"dbc3e37c063ae0ca255074586f8d79ecbcefed442aa7d27e14581b9f94a67471"} Nov 25 06:56:36 crc kubenswrapper[4482]: I1125 06:56:36.675075 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z626q" Nov 25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.626535 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd"] Nov 25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.627540 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" Nov 25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.631671 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.637868 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd"] Nov 25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.815472 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlgp4\" (UniqueName: \"kubernetes.io/projected/567aab34-663e-4100-84f5-99bda36c5ec9-kube-api-access-mlgp4\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd\" (UID: \"567aab34-663e-4100-84f5-99bda36c5ec9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" Nov 25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.815578 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/567aab34-663e-4100-84f5-99bda36c5ec9-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd\" (UID: \"567aab34-663e-4100-84f5-99bda36c5ec9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" Nov 25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.815645 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/567aab34-663e-4100-84f5-99bda36c5ec9-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd\" (UID: \"567aab34-663e-4100-84f5-99bda36c5ec9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" Nov 
25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.916836 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/567aab34-663e-4100-84f5-99bda36c5ec9-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd\" (UID: \"567aab34-663e-4100-84f5-99bda36c5ec9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" Nov 25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.916908 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/567aab34-663e-4100-84f5-99bda36c5ec9-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd\" (UID: \"567aab34-663e-4100-84f5-99bda36c5ec9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" Nov 25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.916947 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlgp4\" (UniqueName: \"kubernetes.io/projected/567aab34-663e-4100-84f5-99bda36c5ec9-kube-api-access-mlgp4\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd\" (UID: \"567aab34-663e-4100-84f5-99bda36c5ec9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" Nov 25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.917519 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/567aab34-663e-4100-84f5-99bda36c5ec9-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd\" (UID: \"567aab34-663e-4100-84f5-99bda36c5ec9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" Nov 25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.917655 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/567aab34-663e-4100-84f5-99bda36c5ec9-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd\" (UID: \"567aab34-663e-4100-84f5-99bda36c5ec9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" Nov 25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.935680 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlgp4\" (UniqueName: \"kubernetes.io/projected/567aab34-663e-4100-84f5-99bda36c5ec9-kube-api-access-mlgp4\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd\" (UID: \"567aab34-663e-4100-84f5-99bda36c5ec9\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" Nov 25 06:56:37 crc kubenswrapper[4482]: I1125 06:56:37.941551 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" Nov 25 06:56:38 crc kubenswrapper[4482]: I1125 06:56:38.087126 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd"] Nov 25 06:56:38 crc kubenswrapper[4482]: I1125 06:56:38.466735 4482 generic.go:334] "Generic (PLEG): container finished" podID="567aab34-663e-4100-84f5-99bda36c5ec9" containerID="0b12d5acad4ee2f6dc56d487cb39a8a87596118b46318fbff7cb45e16b992dbf" exitCode=0 Nov 25 06:56:38 crc kubenswrapper[4482]: I1125 06:56:38.466785 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" event={"ID":"567aab34-663e-4100-84f5-99bda36c5ec9","Type":"ContainerDied","Data":"0b12d5acad4ee2f6dc56d487cb39a8a87596118b46318fbff7cb45e16b992dbf"} Nov 25 06:56:38 crc kubenswrapper[4482]: I1125 06:56:38.466817 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" event={"ID":"567aab34-663e-4100-84f5-99bda36c5ec9","Type":"ContainerStarted","Data":"a48cbcaf7d58fab1dacf3fc3ccec99c051c31247d238f0ab15e2cef51b631308"} Nov 25 06:56:39 crc kubenswrapper[4482]: I1125 06:56:39.117678 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 06:56:39 crc kubenswrapper[4482]: I1125 06:56:39.117743 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 06:56:39 crc kubenswrapper[4482]: I1125 06:56:39.117790 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:56:39 crc kubenswrapper[4482]: I1125 06:56:39.118245 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d84812a555ffdedafcf55f0c474a9703c65d1fb93d154179be65ddf6b69c96ac"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 06:56:39 crc kubenswrapper[4482]: I1125 06:56:39.118309 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://d84812a555ffdedafcf55f0c474a9703c65d1fb93d154179be65ddf6b69c96ac" gracePeriod=600 Nov 25 06:56:39 crc kubenswrapper[4482]: I1125 06:56:39.473159 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="d84812a555ffdedafcf55f0c474a9703c65d1fb93d154179be65ddf6b69c96ac" exitCode=0 Nov 25 06:56:39 crc kubenswrapper[4482]: I1125 06:56:39.473286 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" 
event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"d84812a555ffdedafcf55f0c474a9703c65d1fb93d154179be65ddf6b69c96ac"} Nov 25 06:56:39 crc kubenswrapper[4482]: I1125 06:56:39.473416 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"18fd7402468da26f930d0a283cd4f3dcbe4ac307cf8525f069560121b3739a9f"} Nov 25 06:56:39 crc kubenswrapper[4482]: I1125 06:56:39.473437 4482 scope.go:117] "RemoveContainer" containerID="b9556eecd99aaa627f2f8338b1f2e2766518897cc04a75034690120a70e07dff" Nov 25 06:56:40 crc kubenswrapper[4482]: I1125 06:56:40.482842 4482 generic.go:334] "Generic (PLEG): container finished" podID="567aab34-663e-4100-84f5-99bda36c5ec9" containerID="fbb1af761b457d20661bc822899b91072177f627312c138198cefe92498daa78" exitCode=0 Nov 25 06:56:40 crc kubenswrapper[4482]: I1125 06:56:40.482904 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" event={"ID":"567aab34-663e-4100-84f5-99bda36c5ec9","Type":"ContainerDied","Data":"fbb1af761b457d20661bc822899b91072177f627312c138198cefe92498daa78"} Nov 25 06:56:41 crc kubenswrapper[4482]: I1125 06:56:41.491584 4482 generic.go:334] "Generic (PLEG): container finished" podID="567aab34-663e-4100-84f5-99bda36c5ec9" containerID="e5682546a1a8319c026e0b4a5c809fe63ea5e44d37f8cb6b2ac29ded270a5452" exitCode=0 Nov 25 06:56:41 crc kubenswrapper[4482]: I1125 06:56:41.491668 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" event={"ID":"567aab34-663e-4100-84f5-99bda36c5ec9","Type":"ContainerDied","Data":"e5682546a1a8319c026e0b4a5c809fe63ea5e44d37f8cb6b2ac29ded270a5452"} Nov 25 06:56:42 crc kubenswrapper[4482]: I1125 06:56:42.668698 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" Nov 25 06:56:42 crc kubenswrapper[4482]: I1125 06:56:42.673055 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/567aab34-663e-4100-84f5-99bda36c5ec9-util\") pod \"567aab34-663e-4100-84f5-99bda36c5ec9\" (UID: \"567aab34-663e-4100-84f5-99bda36c5ec9\") " Nov 25 06:56:42 crc kubenswrapper[4482]: I1125 06:56:42.673130 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/567aab34-663e-4100-84f5-99bda36c5ec9-bundle\") pod \"567aab34-663e-4100-84f5-99bda36c5ec9\" (UID: \"567aab34-663e-4100-84f5-99bda36c5ec9\") " Nov 25 06:56:42 crc kubenswrapper[4482]: I1125 06:56:42.673155 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlgp4\" (UniqueName: \"kubernetes.io/projected/567aab34-663e-4100-84f5-99bda36c5ec9-kube-api-access-mlgp4\") pod \"567aab34-663e-4100-84f5-99bda36c5ec9\" (UID: \"567aab34-663e-4100-84f5-99bda36c5ec9\") " Nov 25 06:56:42 crc kubenswrapper[4482]: I1125 06:56:42.673685 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567aab34-663e-4100-84f5-99bda36c5ec9-bundle" (OuterVolumeSpecName: "bundle") pod "567aab34-663e-4100-84f5-99bda36c5ec9" (UID: "567aab34-663e-4100-84f5-99bda36c5ec9"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:56:42 crc kubenswrapper[4482]: I1125 06:56:42.677735 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567aab34-663e-4100-84f5-99bda36c5ec9-kube-api-access-mlgp4" (OuterVolumeSpecName: "kube-api-access-mlgp4") pod "567aab34-663e-4100-84f5-99bda36c5ec9" (UID: "567aab34-663e-4100-84f5-99bda36c5ec9"). InnerVolumeSpecName "kube-api-access-mlgp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:56:42 crc kubenswrapper[4482]: I1125 06:56:42.683749 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567aab34-663e-4100-84f5-99bda36c5ec9-util" (OuterVolumeSpecName: "util") pod "567aab34-663e-4100-84f5-99bda36c5ec9" (UID: "567aab34-663e-4100-84f5-99bda36c5ec9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:56:42 crc kubenswrapper[4482]: I1125 06:56:42.774340 4482 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/567aab34-663e-4100-84f5-99bda36c5ec9-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:42 crc kubenswrapper[4482]: I1125 06:56:42.774378 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlgp4\" (UniqueName: \"kubernetes.io/projected/567aab34-663e-4100-84f5-99bda36c5ec9-kube-api-access-mlgp4\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:42 crc kubenswrapper[4482]: I1125 06:56:42.774391 4482 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/567aab34-663e-4100-84f5-99bda36c5ec9-util\") on node \"crc\" DevicePath \"\"" Nov 25 06:56:43 crc kubenswrapper[4482]: I1125 06:56:43.504433 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" event={"ID":"567aab34-663e-4100-84f5-99bda36c5ec9","Type":"ContainerDied","Data":"a48cbcaf7d58fab1dacf3fc3ccec99c051c31247d238f0ab15e2cef51b631308"} Nov 25 06:56:43 crc kubenswrapper[4482]: I1125 06:56:43.504474 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a48cbcaf7d58fab1dacf3fc3ccec99c051c31247d238f0ab15e2cef51b631308" Nov 25 06:56:43 crc kubenswrapper[4482]: I1125 06:56:43.504475 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erdnnd" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.493620 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-rp4tg"] Nov 25 06:56:45 crc kubenswrapper[4482]: E1125 06:56:45.494141 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567aab34-663e-4100-84f5-99bda36c5ec9" containerName="extract" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.494156 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="567aab34-663e-4100-84f5-99bda36c5ec9" containerName="extract" Nov 25 06:56:45 crc kubenswrapper[4482]: E1125 06:56:45.494187 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567aab34-663e-4100-84f5-99bda36c5ec9" containerName="pull" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.494194 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="567aab34-663e-4100-84f5-99bda36c5ec9" containerName="pull" Nov 25 06:56:45 crc kubenswrapper[4482]: E1125 06:56:45.494208 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567aab34-663e-4100-84f5-99bda36c5ec9" containerName="util" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.494213 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="567aab34-663e-4100-84f5-99bda36c5ec9" containerName="util" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.494325 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="567aab34-663e-4100-84f5-99bda36c5ec9" containerName="extract" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.494738 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-rp4tg" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.496651 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.496692 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.496875 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-wktl9" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.500435 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v795\" (UniqueName: \"kubernetes.io/projected/0432409e-46b6-4f45-9855-7958989f6f74-kube-api-access-4v795\") pod \"nmstate-operator-557fdffb88-rp4tg\" (UID: \"0432409e-46b6-4f45-9855-7958989f6f74\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-rp4tg" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.517987 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-rp4tg"] Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.601781 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v795\" (UniqueName: \"kubernetes.io/projected/0432409e-46b6-4f45-9855-7958989f6f74-kube-api-access-4v795\") pod \"nmstate-operator-557fdffb88-rp4tg\" (UID: \"0432409e-46b6-4f45-9855-7958989f6f74\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-rp4tg" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.618920 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v795\" 
(UniqueName: \"kubernetes.io/projected/0432409e-46b6-4f45-9855-7958989f6f74-kube-api-access-4v795\") pod \"nmstate-operator-557fdffb88-rp4tg\" (UID: \"0432409e-46b6-4f45-9855-7958989f6f74\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-rp4tg" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.814862 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-rp4tg" Nov 25 06:56:45 crc kubenswrapper[4482]: I1125 06:56:45.991412 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-rp4tg"] Nov 25 06:56:46 crc kubenswrapper[4482]: I1125 06:56:46.523724 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-rp4tg" event={"ID":"0432409e-46b6-4f45-9855-7958989f6f74","Type":"ContainerStarted","Data":"a84177c12fb47f05a9641d957defbb488d44ea40b69ca305bcd4b6a031bde8f3"} Nov 25 06:56:48 crc kubenswrapper[4482]: I1125 06:56:48.537446 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-rp4tg" event={"ID":"0432409e-46b6-4f45-9855-7958989f6f74","Type":"ContainerStarted","Data":"c46e0e2131de1a2bb474b7c0e9a119c11df449f10be86769dede536da53fb5ce"} Nov 25 06:56:48 crc kubenswrapper[4482]: I1125 06:56:48.556626 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-rp4tg" podStartSLOduration=1.282314266 podStartE2EDuration="3.556597113s" podCreationTimestamp="2025-11-25 06:56:45 +0000 UTC" firstStartedPulling="2025-11-25 06:56:46.00357599 +0000 UTC m=+580.491807249" lastFinishedPulling="2025-11-25 06:56:48.277858837 +0000 UTC m=+582.766090096" observedRunningTime="2025-11-25 06:56:48.551368554 +0000 UTC m=+583.039599812" watchObservedRunningTime="2025-11-25 06:56:48.556597113 +0000 UTC m=+583.044828363" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.373982 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-bk5ck"] Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.375215 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-bk5ck" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.377509 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-84zf2" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.382937 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9"] Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.383607 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.385981 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.417702 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-fjscb"] Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.418431 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.423948 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9"] Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.448941 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-bk5ck"] Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.512376 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc"] Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.513127 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" Nov 25 06:56:49 crc kubenswrapper[4482]: W1125 06:56:49.516260 4482 reflector.go:561] object-"openshift-nmstate"/"default-dockercfg-tb7vq": failed to list *v1.Secret: secrets "default-dockercfg-tb7vq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Nov 25 06:56:49 crc kubenswrapper[4482]: E1125 06:56:49.516311 4482 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"default-dockercfg-tb7vq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-tb7vq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 06:56:49 crc kubenswrapper[4482]: W1125 06:56:49.516971 4482 reflector.go:561] object-"openshift-nmstate"/"nginx-conf": failed to list *v1.ConfigMap: configmaps "nginx-conf" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Nov 25 06:56:49 crc kubenswrapper[4482]: E1125 06:56:49.516998 4482 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nginx-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"nginx-conf\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 06:56:49 crc kubenswrapper[4482]: W1125 06:56:49.518509 4482 reflector.go:561] object-"openshift-nmstate"/"plugin-serving-cert": failed to list *v1.Secret: secrets "plugin-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Nov 25 06:56:49 crc kubenswrapper[4482]: E1125 06:56:49.518541 4482 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"plugin-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"plugin-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.541324 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc"] Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.554240 
4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/98418653-34b1-4992-97a1-ccd79bbaff55-ovs-socket\") pod \"nmstate-handler-fjscb\" (UID: \"98418653-34b1-4992-97a1-ccd79bbaff55\") " pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.554316 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cdbc8692-6df7-4640-8c66-a50e3df8b9d2-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-7w2b9\" (UID: \"cdbc8692-6df7-4640-8c66-a50e3df8b9d2\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.554454 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b53580c1-0cca-4b9a-b478-0fd7e888f00e-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-q7tqc\" (UID: \"b53580c1-0cca-4b9a-b478-0fd7e888f00e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.554579 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v966f\" (UniqueName: \"kubernetes.io/projected/98418653-34b1-4992-97a1-ccd79bbaff55-kube-api-access-v966f\") pod \"nmstate-handler-fjscb\" (UID: \"98418653-34b1-4992-97a1-ccd79bbaff55\") " pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.554622 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx6xz\" (UniqueName: \"kubernetes.io/projected/b53580c1-0cca-4b9a-b478-0fd7e888f00e-kube-api-access-wx6xz\") pod \"nmstate-console-plugin-5874bd7bc5-q7tqc\" (UID: \"b53580c1-0cca-4b9a-b478-0fd7e888f00e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.554719 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b53580c1-0cca-4b9a-b478-0fd7e888f00e-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-q7tqc\" (UID: \"b53580c1-0cca-4b9a-b478-0fd7e888f00e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.554762 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcpln\" (UniqueName: \"kubernetes.io/projected/cdbc8692-6df7-4640-8c66-a50e3df8b9d2-kube-api-access-xcpln\") pod \"nmstate-webhook-6b89b748d8-7w2b9\" (UID: \"cdbc8692-6df7-4640-8c66-a50e3df8b9d2\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.554795 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/98418653-34b1-4992-97a1-ccd79bbaff55-dbus-socket\") pod \"nmstate-handler-fjscb\" (UID: \"98418653-34b1-4992-97a1-ccd79bbaff55\") " pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.554841 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6xzr\" (UniqueName: 
\"kubernetes.io/projected/6d248756-a13d-460a-a7d9-a64a0ca71baa-kube-api-access-f6xzr\") pod \"nmstate-metrics-5dcf9c57c5-bk5ck\" (UID: \"6d248756-a13d-460a-a7d9-a64a0ca71baa\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-bk5ck" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.554875 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/98418653-34b1-4992-97a1-ccd79bbaff55-nmstate-lock\") pod \"nmstate-handler-fjscb\" (UID: \"98418653-34b1-4992-97a1-ccd79bbaff55\") " pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.655565 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b53580c1-0cca-4b9a-b478-0fd7e888f00e-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-q7tqc\" (UID: \"b53580c1-0cca-4b9a-b478-0fd7e888f00e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.655613 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcpln\" (UniqueName: \"kubernetes.io/projected/cdbc8692-6df7-4640-8c66-a50e3df8b9d2-kube-api-access-xcpln\") pod \"nmstate-webhook-6b89b748d8-7w2b9\" (UID: \"cdbc8692-6df7-4640-8c66-a50e3df8b9d2\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.655637 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/98418653-34b1-4992-97a1-ccd79bbaff55-dbus-socket\") pod \"nmstate-handler-fjscb\" (UID: \"98418653-34b1-4992-97a1-ccd79bbaff55\") " pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.655663 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6xzr\" (UniqueName: \"kubernetes.io/projected/6d248756-a13d-460a-a7d9-a64a0ca71baa-kube-api-access-f6xzr\") pod \"nmstate-metrics-5dcf9c57c5-bk5ck\" (UID: \"6d248756-a13d-460a-a7d9-a64a0ca71baa\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-bk5ck" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.655681 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/98418653-34b1-4992-97a1-ccd79bbaff55-nmstate-lock\") pod \"nmstate-handler-fjscb\" (UID: \"98418653-34b1-4992-97a1-ccd79bbaff55\") " pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.655704 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/98418653-34b1-4992-97a1-ccd79bbaff55-ovs-socket\") pod \"nmstate-handler-fjscb\" (UID: \"98418653-34b1-4992-97a1-ccd79bbaff55\") " pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.655724 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cdbc8692-6df7-4640-8c66-a50e3df8b9d2-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-7w2b9\" (UID: \"cdbc8692-6df7-4640-8c66-a50e3df8b9d2\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.655767 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b53580c1-0cca-4b9a-b478-0fd7e888f00e-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-q7tqc\" (UID: \"b53580c1-0cca-4b9a-b478-0fd7e888f00e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.655811 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v966f\" (UniqueName: \"kubernetes.io/projected/98418653-34b1-4992-97a1-ccd79bbaff55-kube-api-access-v966f\") pod \"nmstate-handler-fjscb\" (UID: \"98418653-34b1-4992-97a1-ccd79bbaff55\") " pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.655825 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx6xz\" (UniqueName: \"kubernetes.io/projected/b53580c1-0cca-4b9a-b478-0fd7e888f00e-kube-api-access-wx6xz\") pod \"nmstate-console-plugin-5874bd7bc5-q7tqc\" (UID: \"b53580c1-0cca-4b9a-b478-0fd7e888f00e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.655826 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/98418653-34b1-4992-97a1-ccd79bbaff55-ovs-socket\") pod \"nmstate-handler-fjscb\" (UID: \"98418653-34b1-4992-97a1-ccd79bbaff55\") " pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.655860 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/98418653-34b1-4992-97a1-ccd79bbaff55-nmstate-lock\") pod \"nmstate-handler-fjscb\" (UID: \"98418653-34b1-4992-97a1-ccd79bbaff55\") " pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: E1125 06:56:49.655928 4482 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 25 06:56:49 crc kubenswrapper[4482]: E1125 06:56:49.655962 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdbc8692-6df7-4640-8c66-a50e3df8b9d2-tls-key-pair podName:cdbc8692-6df7-4640-8c66-a50e3df8b9d2 nodeName:}" failed. No retries permitted until 2025-11-25 06:56:50.155948547 +0000 UTC m=+584.644179806 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/cdbc8692-6df7-4640-8c66-a50e3df8b9d2-tls-key-pair") pod "nmstate-webhook-6b89b748d8-7w2b9" (UID: "cdbc8692-6df7-4640-8c66-a50e3df8b9d2") : secret "openshift-nmstate-webhook" not found Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.656021 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/98418653-34b1-4992-97a1-ccd79bbaff55-dbus-socket\") pod \"nmstate-handler-fjscb\" (UID: \"98418653-34b1-4992-97a1-ccd79bbaff55\") " pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.683976 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcpln\" (UniqueName: \"kubernetes.io/projected/cdbc8692-6df7-4640-8c66-a50e3df8b9d2-kube-api-access-xcpln\") pod \"nmstate-webhook-6b89b748d8-7w2b9\" (UID: \"cdbc8692-6df7-4640-8c66-a50e3df8b9d2\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.684005 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6xzr\" (UniqueName: \"kubernetes.io/projected/6d248756-a13d-460a-a7d9-a64a0ca71baa-kube-api-access-f6xzr\") pod \"nmstate-metrics-5dcf9c57c5-bk5ck\" (UID: \"6d248756-a13d-460a-a7d9-a64a0ca71baa\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-bk5ck" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.687490 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-bk5ck" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.688611 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v966f\" (UniqueName: \"kubernetes.io/projected/98418653-34b1-4992-97a1-ccd79bbaff55-kube-api-access-v966f\") pod \"nmstate-handler-fjscb\" (UID: \"98418653-34b1-4992-97a1-ccd79bbaff55\") " pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.696719 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx6xz\" (UniqueName: \"kubernetes.io/projected/b53580c1-0cca-4b9a-b478-0fd7e888f00e-kube-api-access-wx6xz\") pod \"nmstate-console-plugin-5874bd7bc5-q7tqc\" (UID: \"b53580c1-0cca-4b9a-b478-0fd7e888f00e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.729448 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.773299 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6c9948bc4b-xgwbm"] Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.774126 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.787664 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c9948bc4b-xgwbm"] Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.875639 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c2198bd-3211-4b64-959c-f0d0360cb71d-oauth-serving-cert\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.875989 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2198bd-3211-4b64-959c-f0d0360cb71d-console-serving-cert\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.876102 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c2198bd-3211-4b64-959c-f0d0360cb71d-trusted-ca-bundle\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.876135 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c2198bd-3211-4b64-959c-f0d0360cb71d-console-config\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.876165 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c2198bd-3211-4b64-959c-f0d0360cb71d-service-ca\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.876219 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c2198bd-3211-4b64-959c-f0d0360cb71d-console-oauth-config\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.876237 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgpdb\" (UniqueName: \"kubernetes.io/projected/3c2198bd-3211-4b64-959c-f0d0360cb71d-kube-api-access-vgpdb\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.964741 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-bk5ck"] Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.977879 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3c2198bd-3211-4b64-959c-f0d0360cb71d-service-ca\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.977949 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c2198bd-3211-4b64-959c-f0d0360cb71d-console-oauth-config\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.977980 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgpdb\" (UniqueName: \"kubernetes.io/projected/3c2198bd-3211-4b64-959c-f0d0360cb71d-kube-api-access-vgpdb\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.978036 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c2198bd-3211-4b64-959c-f0d0360cb71d-oauth-serving-cert\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.978132 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2198bd-3211-4b64-959c-f0d0360cb71d-console-serving-cert\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.978223 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c2198bd-3211-4b64-959c-f0d0360cb71d-trusted-ca-bundle\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.978262 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c2198bd-3211-4b64-959c-f0d0360cb71d-console-config\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.979307 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c2198bd-3211-4b64-959c-f0d0360cb71d-console-config\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.979731 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c2198bd-3211-4b64-959c-f0d0360cb71d-oauth-serving-cert\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.980152 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3c2198bd-3211-4b64-959c-f0d0360cb71d-trusted-ca-bundle\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.981024 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c2198bd-3211-4b64-959c-f0d0360cb71d-service-ca\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.985930 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c2198bd-3211-4b64-959c-f0d0360cb71d-console-serving-cert\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:49 crc kubenswrapper[4482]: I1125 06:56:49.985934 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c2198bd-3211-4b64-959c-f0d0360cb71d-console-oauth-config\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:49.994600 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgpdb\" (UniqueName: \"kubernetes.io/projected/3c2198bd-3211-4b64-959c-f0d0360cb71d-kube-api-access-vgpdb\") pod \"console-6c9948bc4b-xgwbm\" (UID: \"3c2198bd-3211-4b64-959c-f0d0360cb71d\") " pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.114684 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.181088 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cdbc8692-6df7-4640-8c66-a50e3df8b9d2-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-7w2b9\" (UID: \"cdbc8692-6df7-4640-8c66-a50e3df8b9d2\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.186492 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cdbc8692-6df7-4640-8c66-a50e3df8b9d2-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-7w2b9\" (UID: \"cdbc8692-6df7-4640-8c66-a50e3df8b9d2\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.285568 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c9948bc4b-xgwbm"] Nov 25 06:56:50 crc kubenswrapper[4482]: W1125 06:56:50.290556 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c2198bd_3211_4b64_959c_f0d0360cb71d.slice/crio-391f4952498d0a0546dcf0cded19fc4bce22267bd3b51b110490eb0b673ea97d WatchSource:0}: Error finding container 391f4952498d0a0546dcf0cded19fc4bce22267bd3b51b110490eb0b673ea97d: Status 404 returned error can't find the container with id 391f4952498d0a0546dcf0cded19fc4bce22267bd3b51b110490eb0b673ea97d Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.296715 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.401431 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.401475 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.406945 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b53580c1-0cca-4b9a-b478-0fd7e888f00e-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-q7tqc\" (UID: \"b53580c1-0cca-4b9a-b478-0fd7e888f00e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.410302 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b53580c1-0cca-4b9a-b478-0fd7e888f00e-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-q7tqc\" (UID: \"b53580c1-0cca-4b9a-b478-0fd7e888f00e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.550782 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fjscb" event={"ID":"98418653-34b1-4992-97a1-ccd79bbaff55","Type":"ContainerStarted","Data":"53635397624b589c39a60109b82a2337222090a1c7af54796af9a8a0686c92e1"} Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.552882 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c9948bc4b-xgwbm" 
event={"ID":"3c2198bd-3211-4b64-959c-f0d0360cb71d","Type":"ContainerStarted","Data":"d4c149d9eb526e8498b884b307953372a9128b520b39e226ed23d9cf95a4139e"} Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.552998 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c9948bc4b-xgwbm" event={"ID":"3c2198bd-3211-4b64-959c-f0d0360cb71d","Type":"ContainerStarted","Data":"391f4952498d0a0546dcf0cded19fc4bce22267bd3b51b110490eb0b673ea97d"} Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.555368 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-bk5ck" event={"ID":"6d248756-a13d-460a-a7d9-a64a0ca71baa","Type":"ContainerStarted","Data":"079b040beb8823561e1497f32a8a02da2abd8e1684288d6409f6da32166ffcd3"} Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.568489 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6c9948bc4b-xgwbm" podStartSLOduration=1.568470156 podStartE2EDuration="1.568470156s" podCreationTimestamp="2025-11-25 06:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:56:50.564464122 +0000 UTC m=+585.052695371" watchObservedRunningTime="2025-11-25 06:56:50.568470156 +0000 UTC m=+585.056701416" Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.664290 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9"] Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.872421 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-tb7vq" Nov 25 06:56:50 crc kubenswrapper[4482]: I1125 06:56:50.877653 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" Nov 25 06:56:51 crc kubenswrapper[4482]: I1125 06:56:51.241611 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc"] Nov 25 06:56:51 crc kubenswrapper[4482]: W1125 06:56:51.245425 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb53580c1_0cca_4b9a_b478_0fd7e888f00e.slice/crio-4f0bf5e3d98f6df95189a9c13df19955531664759e34200e88e6a6e85f3149b5 WatchSource:0}: Error finding container 4f0bf5e3d98f6df95189a9c13df19955531664759e34200e88e6a6e85f3149b5: Status 404 returned error can't find the container with id 4f0bf5e3d98f6df95189a9c13df19955531664759e34200e88e6a6e85f3149b5 Nov 25 06:56:51 crc kubenswrapper[4482]: I1125 06:56:51.562375 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" event={"ID":"cdbc8692-6df7-4640-8c66-a50e3df8b9d2","Type":"ContainerStarted","Data":"db37001e4185a7603e5ea4c8813fd0efbe027dae6e26faefdabdb3d63ca74fe2"} Nov 25 06:56:51 crc kubenswrapper[4482]: I1125 06:56:51.564110 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" event={"ID":"b53580c1-0cca-4b9a-b478-0fd7e888f00e","Type":"ContainerStarted","Data":"4f0bf5e3d98f6df95189a9c13df19955531664759e34200e88e6a6e85f3149b5"} Nov 25 06:56:53 crc kubenswrapper[4482]: I1125 06:56:53.584907 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" event={"ID":"cdbc8692-6df7-4640-8c66-a50e3df8b9d2","Type":"ContainerStarted","Data":"bbc53b15b6cb9e3b45c9d4406550ebb63bbb0c07310b6c68f6d5302503b8c80a"} Nov 25 06:56:53 crc kubenswrapper[4482]: I1125 06:56:53.585336 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" Nov 25 06:56:53 crc kubenswrapper[4482]: I1125 06:56:53.588770 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-bk5ck" event={"ID":"6d248756-a13d-460a-a7d9-a64a0ca71baa","Type":"ContainerStarted","Data":"5c3937c57e6d054d18bf10804183b73734f0d0fc6a518e193f93576f72265ea2"} Nov 25 06:56:53 crc kubenswrapper[4482]: I1125 06:56:53.590254 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fjscb" event={"ID":"98418653-34b1-4992-97a1-ccd79bbaff55","Type":"ContainerStarted","Data":"a03d6cea049b72b96604d4e9168cdb0c1393a911b2160da283dbd230d3a10d15"} Nov 25 06:56:53 crc kubenswrapper[4482]: I1125 06:56:53.590405 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:56:53 crc kubenswrapper[4482]: I1125 06:56:53.599826 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" podStartSLOduration=2.7405112259999997 podStartE2EDuration="4.599810485s" podCreationTimestamp="2025-11-25 06:56:49 +0000 UTC" firstStartedPulling="2025-11-25 06:56:50.673405213 +0000 UTC m=+585.161636472" lastFinishedPulling="2025-11-25 06:56:52.532704472 +0000 UTC m=+587.020935731" observedRunningTime="2025-11-25 06:56:53.598818894 +0000 UTC m=+588.087050153" watchObservedRunningTime="2025-11-25 06:56:53.599810485 +0000 UTC m=+588.088041743" Nov 25 06:56:53 crc kubenswrapper[4482]: I1125 06:56:53.620496 4482 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-fjscb" podStartSLOduration=1.9062851159999998 podStartE2EDuration="4.620482066s" podCreationTimestamp="2025-11-25 06:56:49 +0000 UTC" firstStartedPulling="2025-11-25 06:56:49.825615186 +0000 UTC m=+584.313846445" lastFinishedPulling="2025-11-25 06:56:52.539812136 +0000 UTC m=+587.028043395" observedRunningTime="2025-11-25 06:56:53.615518706 +0000 UTC m=+588.103749964" watchObservedRunningTime="2025-11-25 06:56:53.620482066 +0000 UTC m=+588.108713324" Nov 25 06:56:54 crc kubenswrapper[4482]: I1125 06:56:54.598926 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" event={"ID":"b53580c1-0cca-4b9a-b478-0fd7e888f00e","Type":"ContainerStarted","Data":"37519106239f933909d6d46c871f1ad96ec332269d67dfd2cefb6655ba03bcfb"} Nov 25 06:56:54 crc kubenswrapper[4482]: I1125 06:56:54.618413 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-q7tqc" podStartSLOduration=3.40076536 podStartE2EDuration="5.618397959s" podCreationTimestamp="2025-11-25 06:56:49 +0000 UTC" firstStartedPulling="2025-11-25 06:56:51.247381492 +0000 UTC m=+585.735612751" lastFinishedPulling="2025-11-25 06:56:53.465014091 +0000 UTC m=+587.953245350" observedRunningTime="2025-11-25 06:56:54.613262995 +0000 UTC m=+589.101494245" watchObservedRunningTime="2025-11-25 06:56:54.618397959 +0000 UTC m=+589.106629218" Nov 25 06:56:55 crc kubenswrapper[4482]: I1125 06:56:55.607931 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-bk5ck" event={"ID":"6d248756-a13d-460a-a7d9-a64a0ca71baa","Type":"ContainerStarted","Data":"c87a0063f73dedf98db13834ec43227741dbe4766b9fa357a01eef51d10f154f"} Nov 25 06:56:55 crc kubenswrapper[4482]: I1125 06:56:55.623130 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-bk5ck" podStartSLOduration=1.343110413 podStartE2EDuration="6.623117152s" podCreationTimestamp="2025-11-25 06:56:49 +0000 UTC" firstStartedPulling="2025-11-25 06:56:49.974857759 +0000 UTC m=+584.463089018" lastFinishedPulling="2025-11-25 06:56:55.254864498 +0000 UTC m=+589.743095757" observedRunningTime="2025-11-25 06:56:55.620263501 +0000 UTC m=+590.108494750" watchObservedRunningTime="2025-11-25 06:56:55.623117152 +0000 UTC m=+590.111348411" Nov 25 06:56:59 crc kubenswrapper[4482]: I1125 06:56:59.748571 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-fjscb" Nov 25 06:57:00 crc kubenswrapper[4482]: I1125 06:57:00.114968 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:57:00 crc kubenswrapper[4482]: I1125 06:57:00.115307 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:57:00 crc kubenswrapper[4482]: I1125 06:57:00.120142 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:57:00 crc kubenswrapper[4482]: I1125 06:57:00.634100 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c9948bc4b-xgwbm" Nov 25 06:57:00 crc kubenswrapper[4482]: I1125 06:57:00.671581 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gqc49"] 
Nov 25 06:57:10 crc kubenswrapper[4482]: I1125 06:57:10.300758 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7w2b9" Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.098341 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94"] Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.099829 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.101921 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.109723 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94"] Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.111490 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70d65622-7a21-472a-ab05-38b37d529801-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94\" (UID: \"70d65622-7a21-472a-ab05-38b37d529801\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.111524 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70d65622-7a21-472a-ab05-38b37d529801-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94\" (UID: \"70d65622-7a21-472a-ab05-38b37d529801\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.111590 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97jhl\" (UniqueName: \"kubernetes.io/projected/70d65622-7a21-472a-ab05-38b37d529801-kube-api-access-97jhl\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94\" (UID: \"70d65622-7a21-472a-ab05-38b37d529801\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.212541 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70d65622-7a21-472a-ab05-38b37d529801-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94\" (UID: \"70d65622-7a21-472a-ab05-38b37d529801\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.212581 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70d65622-7a21-472a-ab05-38b37d529801-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94\" (UID: \"70d65622-7a21-472a-ab05-38b37d529801\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.212617 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97jhl\" 
(UniqueName: \"kubernetes.io/projected/70d65622-7a21-472a-ab05-38b37d529801-kube-api-access-97jhl\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94\" (UID: \"70d65622-7a21-472a-ab05-38b37d529801\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.213329 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70d65622-7a21-472a-ab05-38b37d529801-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94\" (UID: \"70d65622-7a21-472a-ab05-38b37d529801\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.213354 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70d65622-7a21-472a-ab05-38b37d529801-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94\" (UID: \"70d65622-7a21-472a-ab05-38b37d529801\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.230805 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97jhl\" (UniqueName: \"kubernetes.io/projected/70d65622-7a21-472a-ab05-38b37d529801-kube-api-access-97jhl\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94\" (UID: \"70d65622-7a21-472a-ab05-38b37d529801\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.414840 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" Nov 25 06:57:20 crc kubenswrapper[4482]: I1125 06:57:20.790248 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94"] Nov 25 06:57:21 crc kubenswrapper[4482]: I1125 06:57:21.749198 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" event={"ID":"70d65622-7a21-472a-ab05-38b37d529801","Type":"ContainerDied","Data":"38c928cc20f78e566a2a06dcdacfbc9239cb2017c7aaf2466a661f1c4874ea31"} Nov 25 06:57:21 crc kubenswrapper[4482]: I1125 06:57:21.750492 4482 generic.go:334] "Generic (PLEG): container finished" podID="70d65622-7a21-472a-ab05-38b37d529801" containerID="38c928cc20f78e566a2a06dcdacfbc9239cb2017c7aaf2466a661f1c4874ea31" exitCode=0 Nov 25 06:57:21 crc kubenswrapper[4482]: I1125 06:57:21.750576 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" event={"ID":"70d65622-7a21-472a-ab05-38b37d529801","Type":"ContainerStarted","Data":"aee43f2821006c5dcedb7c3dbf074ae3b17f10a5389b4eca7032c0d7c109ca2f"} Nov 25 06:57:23 crc kubenswrapper[4482]: I1125 06:57:23.761669 4482 generic.go:334] "Generic (PLEG): container finished" podID="70d65622-7a21-472a-ab05-38b37d529801" containerID="8c344b4404361bcf55d1dc28122c07e723cd97abe58f4eb5fd9fdd660150f750" exitCode=0 Nov 25 06:57:23 crc kubenswrapper[4482]: I1125 06:57:23.761767 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" event={"ID":"70d65622-7a21-472a-ab05-38b37d529801","Type":"ContainerDied","Data":"8c344b4404361bcf55d1dc28122c07e723cd97abe58f4eb5fd9fdd660150f750"} Nov 25 06:57:24 crc kubenswrapper[4482]: I1125 06:57:24.768431 4482 generic.go:334] "Generic (PLEG): container finished" podID="70d65622-7a21-472a-ab05-38b37d529801" containerID="a9482096ed9bd2ca75e2559b45c0a8a91128b521ae84e7b0692b1532dc432565" exitCode=0 Nov 25 06:57:24 crc kubenswrapper[4482]: I1125 06:57:24.768532 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" event={"ID":"70d65622-7a21-472a-ab05-38b37d529801","Type":"ContainerDied","Data":"a9482096ed9bd2ca75e2559b45c0a8a91128b521ae84e7b0692b1532dc432565"} Nov 25 06:57:25 crc kubenswrapper[4482]: I1125 06:57:25.700366 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-gqc49" podUID="368e9f64-0e31-464e-9714-b4b3ea73cc36" containerName="console" containerID="cri-o://5f024aa45c426091a75ad57d34f1f178e461d078a8c54717cd7d78e0badf58eb" gracePeriod=15 Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.019254 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.022427 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gqc49_368e9f64-0e31-464e-9714-b4b3ea73cc36/console/0.log" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.022518 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.083509 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97jhl\" (UniqueName: \"kubernetes.io/projected/70d65622-7a21-472a-ab05-38b37d529801-kube-api-access-97jhl\") pod \"70d65622-7a21-472a-ab05-38b37d529801\" (UID: \"70d65622-7a21-472a-ab05-38b37d529801\") " Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.083588 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-oauth-config\") pod \"368e9f64-0e31-464e-9714-b4b3ea73cc36\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.083636 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70d65622-7a21-472a-ab05-38b37d529801-util\") pod \"70d65622-7a21-472a-ab05-38b37d529801\" (UID: \"70d65622-7a21-472a-ab05-38b37d529801\") " Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.083667 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-oauth-serving-cert\") pod \"368e9f64-0e31-464e-9714-b4b3ea73cc36\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.083685 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-trusted-ca-bundle\") pod \"368e9f64-0e31-464e-9714-b4b3ea73cc36\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.083760 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-config\") pod \"368e9f64-0e31-464e-9714-b4b3ea73cc36\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.084970 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "368e9f64-0e31-464e-9714-b4b3ea73cc36" (UID: "368e9f64-0e31-464e-9714-b4b3ea73cc36"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.085099 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "368e9f64-0e31-464e-9714-b4b3ea73cc36" (UID: "368e9f64-0e31-464e-9714-b4b3ea73cc36"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.085213 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-config" (OuterVolumeSpecName: "console-config") pod "368e9f64-0e31-464e-9714-b4b3ea73cc36" (UID: "368e9f64-0e31-464e-9714-b4b3ea73cc36"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.090763 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d65622-7a21-472a-ab05-38b37d529801-kube-api-access-97jhl" (OuterVolumeSpecName: "kube-api-access-97jhl") pod "70d65622-7a21-472a-ab05-38b37d529801" (UID: "70d65622-7a21-472a-ab05-38b37d529801"). InnerVolumeSpecName "kube-api-access-97jhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.091613 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "368e9f64-0e31-464e-9714-b4b3ea73cc36" (UID: "368e9f64-0e31-464e-9714-b4b3ea73cc36"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.184614 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70d65622-7a21-472a-ab05-38b37d529801-bundle\") pod \"70d65622-7a21-472a-ab05-38b37d529801\" (UID: \"70d65622-7a21-472a-ab05-38b37d529801\") " Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.184683 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-service-ca\") pod \"368e9f64-0e31-464e-9714-b4b3ea73cc36\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.184715 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-serving-cert\") pod \"368e9f64-0e31-464e-9714-b4b3ea73cc36\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.184746 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9zw8\" (UniqueName: \"kubernetes.io/projected/368e9f64-0e31-464e-9714-b4b3ea73cc36-kube-api-access-z9zw8\") pod \"368e9f64-0e31-464e-9714-b4b3ea73cc36\" (UID: \"368e9f64-0e31-464e-9714-b4b3ea73cc36\") " Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.185252 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97jhl\" (UniqueName: \"kubernetes.io/projected/70d65622-7a21-472a-ab05-38b37d529801-kube-api-access-97jhl\") on node \"crc\" DevicePath \"\"" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.185272 4482 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.185283 4482 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.185291 4482 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 
06:57:26.185301 4482 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.185318 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-service-ca" (OuterVolumeSpecName: "service-ca") pod "368e9f64-0e31-464e-9714-b4b3ea73cc36" (UID: "368e9f64-0e31-464e-9714-b4b3ea73cc36"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.186043 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70d65622-7a21-472a-ab05-38b37d529801-bundle" (OuterVolumeSpecName: "bundle") pod "70d65622-7a21-472a-ab05-38b37d529801" (UID: "70d65622-7a21-472a-ab05-38b37d529801"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.188493 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "368e9f64-0e31-464e-9714-b4b3ea73cc36" (UID: "368e9f64-0e31-464e-9714-b4b3ea73cc36"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.188575 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368e9f64-0e31-464e-9714-b4b3ea73cc36-kube-api-access-z9zw8" (OuterVolumeSpecName: "kube-api-access-z9zw8") pod "368e9f64-0e31-464e-9714-b4b3ea73cc36" (UID: "368e9f64-0e31-464e-9714-b4b3ea73cc36"). InnerVolumeSpecName "kube-api-access-z9zw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.258774 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70d65622-7a21-472a-ab05-38b37d529801-util" (OuterVolumeSpecName: "util") pod "70d65622-7a21-472a-ab05-38b37d529801" (UID: "70d65622-7a21-472a-ab05-38b37d529801"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.285837 4482 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70d65622-7a21-472a-ab05-38b37d529801-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.285940 4482 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/368e9f64-0e31-464e-9714-b4b3ea73cc36-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.286002 4482 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/368e9f64-0e31-464e-9714-b4b3ea73cc36-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.286058 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9zw8\" (UniqueName: \"kubernetes.io/projected/368e9f64-0e31-464e-9714-b4b3ea73cc36-kube-api-access-z9zw8\") on node \"crc\" DevicePath \"\"" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.286108 4482 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70d65622-7a21-472a-ab05-38b37d529801-util\") on node \"crc\" DevicePath \"\"" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.786460 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.786504 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qjl94" event={"ID":"70d65622-7a21-472a-ab05-38b37d529801","Type":"ContainerDied","Data":"aee43f2821006c5dcedb7c3dbf074ae3b17f10a5389b4eca7032c0d7c109ca2f"} Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.787539 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aee43f2821006c5dcedb7c3dbf074ae3b17f10a5389b4eca7032c0d7c109ca2f" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.788234 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gqc49_368e9f64-0e31-464e-9714-b4b3ea73cc36/console/0.log" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.788335 4482 generic.go:334] "Generic (PLEG): container finished" podID="368e9f64-0e31-464e-9714-b4b3ea73cc36" containerID="5f024aa45c426091a75ad57d34f1f178e461d078a8c54717cd7d78e0badf58eb" exitCode=2 Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.788403 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gqc49" event={"ID":"368e9f64-0e31-464e-9714-b4b3ea73cc36","Type":"ContainerDied","Data":"5f024aa45c426091a75ad57d34f1f178e461d078a8c54717cd7d78e0badf58eb"} Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.788461 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gqc49" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.788492 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gqc49" event={"ID":"368e9f64-0e31-464e-9714-b4b3ea73cc36","Type":"ContainerDied","Data":"c593d278ac111fc337697164b4be24933956472aeca1f245f9690a4dd1d5a28d"} Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.788517 4482 scope.go:117] "RemoveContainer" containerID="5f024aa45c426091a75ad57d34f1f178e461d078a8c54717cd7d78e0badf58eb" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.822190 4482 scope.go:117] "RemoveContainer" containerID="5f024aa45c426091a75ad57d34f1f178e461d078a8c54717cd7d78e0badf58eb" Nov 25 06:57:26 crc kubenswrapper[4482]: E1125 06:57:26.822597 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f024aa45c426091a75ad57d34f1f178e461d078a8c54717cd7d78e0badf58eb\": container with ID starting with 5f024aa45c426091a75ad57d34f1f178e461d078a8c54717cd7d78e0badf58eb not found: ID does not exist" containerID="5f024aa45c426091a75ad57d34f1f178e461d078a8c54717cd7d78e0badf58eb" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.822634 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f024aa45c426091a75ad57d34f1f178e461d078a8c54717cd7d78e0badf58eb"} err="failed to get container status \"5f024aa45c426091a75ad57d34f1f178e461d078a8c54717cd7d78e0badf58eb\": rpc error: code = NotFound desc = could not find container \"5f024aa45c426091a75ad57d34f1f178e461d078a8c54717cd7d78e0badf58eb\": container with ID starting with 5f024aa45c426091a75ad57d34f1f178e461d078a8c54717cd7d78e0badf58eb not found: ID does not exist" Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.823052 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gqc49"] Nov 25 06:57:26 crc kubenswrapper[4482]: I1125 06:57:26.826286 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-gqc49"] Nov 25 06:57:27 crc kubenswrapper[4482]: I1125 06:57:27.837974 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="368e9f64-0e31-464e-9714-b4b3ea73cc36" path="/var/lib/kubelet/pods/368e9f64-0e31-464e-9714-b4b3ea73cc36/volumes" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.850362 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896"] Nov 25 06:57:36 crc kubenswrapper[4482]: E1125 06:57:36.850760 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d65622-7a21-472a-ab05-38b37d529801" containerName="pull" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.850773 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d65622-7a21-472a-ab05-38b37d529801" containerName="pull" Nov 25 06:57:36 crc kubenswrapper[4482]: E1125 06:57:36.850783 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d65622-7a21-472a-ab05-38b37d529801" containerName="extract" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.850789 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d65622-7a21-472a-ab05-38b37d529801" containerName="extract" Nov 25 06:57:36 crc kubenswrapper[4482]: E1125 06:57:36.850803 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d65622-7a21-472a-ab05-38b37d529801" containerName="util" Nov 25 06:57:36 crc 
kubenswrapper[4482]: I1125 06:57:36.850808 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d65622-7a21-472a-ab05-38b37d529801" containerName="util" Nov 25 06:57:36 crc kubenswrapper[4482]: E1125 06:57:36.850819 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368e9f64-0e31-464e-9714-b4b3ea73cc36" containerName="console" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.850825 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="368e9f64-0e31-464e-9714-b4b3ea73cc36" containerName="console" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.850908 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d65622-7a21-472a-ab05-38b37d529801" containerName="extract" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.850917 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="368e9f64-0e31-464e-9714-b4b3ea73cc36" containerName="console" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.851256 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.854226 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.855306 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.855495 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zd8xz" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.856319 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.856652 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.904288 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896"] Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.934907 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61f162c1-bcc6-4098-86f3-7cff5790a2f3-webhook-cert\") pod \"metallb-operator-controller-manager-6b7b9ccd57-7v896\" (UID: \"61f162c1-bcc6-4098-86f3-7cff5790a2f3\") " pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.934963 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vt9\" (UniqueName: \"kubernetes.io/projected/61f162c1-bcc6-4098-86f3-7cff5790a2f3-kube-api-access-n7vt9\") pod \"metallb-operator-controller-manager-6b7b9ccd57-7v896\" (UID: \"61f162c1-bcc6-4098-86f3-7cff5790a2f3\") " pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 06:57:36 crc kubenswrapper[4482]: I1125 06:57:36.935018 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61f162c1-bcc6-4098-86f3-7cff5790a2f3-apiservice-cert\") pod \"metallb-operator-controller-manager-6b7b9ccd57-7v896\" (UID: 
\"61f162c1-bcc6-4098-86f3-7cff5790a2f3\") " pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.009464 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-785dccc789-hknk8"] Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.010242 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.011893 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-bfsvd" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.013239 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.014743 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.032273 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-785dccc789-hknk8"] Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.035865 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61f162c1-bcc6-4098-86f3-7cff5790a2f3-webhook-cert\") pod \"metallb-operator-controller-manager-6b7b9ccd57-7v896\" (UID: \"61f162c1-bcc6-4098-86f3-7cff5790a2f3\") " pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.035916 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7vt9\" (UniqueName: \"kubernetes.io/projected/61f162c1-bcc6-4098-86f3-7cff5790a2f3-kube-api-access-n7vt9\") pod \"metallb-operator-controller-manager-6b7b9ccd57-7v896\" (UID: \"61f162c1-bcc6-4098-86f3-7cff5790a2f3\") " pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.036152 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61f162c1-bcc6-4098-86f3-7cff5790a2f3-apiservice-cert\") pod \"metallb-operator-controller-manager-6b7b9ccd57-7v896\" (UID: \"61f162c1-bcc6-4098-86f3-7cff5790a2f3\") " pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.041490 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61f162c1-bcc6-4098-86f3-7cff5790a2f3-webhook-cert\") pod \"metallb-operator-controller-manager-6b7b9ccd57-7v896\" (UID: \"61f162c1-bcc6-4098-86f3-7cff5790a2f3\") " pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.042532 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61f162c1-bcc6-4098-86f3-7cff5790a2f3-apiservice-cert\") pod \"metallb-operator-controller-manager-6b7b9ccd57-7v896\" (UID: \"61f162c1-bcc6-4098-86f3-7cff5790a2f3\") " pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.059800 4482 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-n7vt9\" (UniqueName: \"kubernetes.io/projected/61f162c1-bcc6-4098-86f3-7cff5790a2f3-kube-api-access-n7vt9\") pod \"metallb-operator-controller-manager-6b7b9ccd57-7v896\" (UID: \"61f162c1-bcc6-4098-86f3-7cff5790a2f3\") " pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.136989 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/20214beb-af93-45bb-8735-168e034105e3-webhook-cert\") pod \"metallb-operator-webhook-server-785dccc789-hknk8\" (UID: \"20214beb-af93-45bb-8735-168e034105e3\") " pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.137040 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdwkf\" (UniqueName: \"kubernetes.io/projected/20214beb-af93-45bb-8735-168e034105e3-kube-api-access-hdwkf\") pod \"metallb-operator-webhook-server-785dccc789-hknk8\" (UID: \"20214beb-af93-45bb-8735-168e034105e3\") " pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.137093 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/20214beb-af93-45bb-8735-168e034105e3-apiservice-cert\") pod \"metallb-operator-webhook-server-785dccc789-hknk8\" (UID: \"20214beb-af93-45bb-8735-168e034105e3\") " pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.165437 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.238685 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/20214beb-af93-45bb-8735-168e034105e3-apiservice-cert\") pod \"metallb-operator-webhook-server-785dccc789-hknk8\" (UID: \"20214beb-af93-45bb-8735-168e034105e3\") " pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.238750 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/20214beb-af93-45bb-8735-168e034105e3-webhook-cert\") pod \"metallb-operator-webhook-server-785dccc789-hknk8\" (UID: \"20214beb-af93-45bb-8735-168e034105e3\") " pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.238790 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdwkf\" (UniqueName: \"kubernetes.io/projected/20214beb-af93-45bb-8735-168e034105e3-kube-api-access-hdwkf\") pod \"metallb-operator-webhook-server-785dccc789-hknk8\" (UID: \"20214beb-af93-45bb-8735-168e034105e3\") " pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.243558 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/20214beb-af93-45bb-8735-168e034105e3-webhook-cert\") pod \"metallb-operator-webhook-server-785dccc789-hknk8\" (UID: \"20214beb-af93-45bb-8735-168e034105e3\") " pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.245193 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/20214beb-af93-45bb-8735-168e034105e3-apiservice-cert\") pod \"metallb-operator-webhook-server-785dccc789-hknk8\" (UID: \"20214beb-af93-45bb-8735-168e034105e3\") " pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.271564 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdwkf\" (UniqueName: \"kubernetes.io/projected/20214beb-af93-45bb-8735-168e034105e3-kube-api-access-hdwkf\") pod \"metallb-operator-webhook-server-785dccc789-hknk8\" (UID: \"20214beb-af93-45bb-8735-168e034105e3\") " pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.321699 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.396602 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896"] Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.715721 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-785dccc789-hknk8"] Nov 25 06:57:37 crc kubenswrapper[4482]: W1125 06:57:37.721995 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20214beb_af93_45bb_8735_168e034105e3.slice/crio-88ef4b0182bc7df5c5263330e64c7d055851368ff3a2898bad98cdf8e2c3e5bc WatchSource:0}: Error finding container 88ef4b0182bc7df5c5263330e64c7d055851368ff3a2898bad98cdf8e2c3e5bc: Status 404 returned error can't find the container with id 88ef4b0182bc7df5c5263330e64c7d055851368ff3a2898bad98cdf8e2c3e5bc Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.852039 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" event={"ID":"61f162c1-bcc6-4098-86f3-7cff5790a2f3","Type":"ContainerStarted","Data":"2ee28ae0d7b5ff129faba151c1406d05b5864f0c38bd8143625cc447846d4ee7"} Nov 25 06:57:37 crc kubenswrapper[4482]: I1125 06:57:37.853589 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" event={"ID":"20214beb-af93-45bb-8735-168e034105e3","Type":"ContainerStarted","Data":"88ef4b0182bc7df5c5263330e64c7d055851368ff3a2898bad98cdf8e2c3e5bc"} Nov 25 06:57:40 crc kubenswrapper[4482]: I1125 06:57:40.886229 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" event={"ID":"61f162c1-bcc6-4098-86f3-7cff5790a2f3","Type":"ContainerStarted","Data":"3fedc62076db9368642d2882fd4055597903be784417326620b567b4d622fa8d"} Nov 25 06:57:40 crc kubenswrapper[4482]: I1125 06:57:40.886594 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 06:57:40 crc kubenswrapper[4482]: I1125 06:57:40.908060 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" podStartSLOduration=1.637388346 podStartE2EDuration="4.908039248s" podCreationTimestamp="2025-11-25 06:57:36 +0000 UTC" firstStartedPulling="2025-11-25 06:57:37.419203363 +0000 UTC m=+631.907434622" lastFinishedPulling="2025-11-25 06:57:40.689854266 +0000 UTC m=+635.178085524" observedRunningTime="2025-11-25 06:57:40.906135448 +0000 UTC m=+635.394366707" watchObservedRunningTime="2025-11-25 06:57:40.908039248 +0000 UTC m=+635.396270507" Nov 25 06:57:43 crc kubenswrapper[4482]: I1125 06:57:43.903008 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" event={"ID":"20214beb-af93-45bb-8735-168e034105e3","Type":"ContainerStarted","Data":"5f0218cfe38cf24bade54d6edac15c95627880cff01d253af74a61cd10f9efd8"} Nov 25 06:57:43 crc kubenswrapper[4482]: I1125 06:57:43.903640 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" Nov 25 06:57:43 crc kubenswrapper[4482]: I1125 06:57:43.946212 4482 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" podStartSLOduration=2.757709087 podStartE2EDuration="7.946190353s" podCreationTimestamp="2025-11-25 06:57:36 +0000 UTC" firstStartedPulling="2025-11-25 06:57:37.724635003 +0000 UTC m=+632.212866262" lastFinishedPulling="2025-11-25 06:57:42.913116279 +0000 UTC m=+637.401347528" observedRunningTime="2025-11-25 06:57:43.935203795 +0000 UTC m=+638.423435054" watchObservedRunningTime="2025-11-25 06:57:43.946190353 +0000 UTC m=+638.434421612" Nov 25 06:57:57 crc kubenswrapper[4482]: I1125 06:57:57.327841 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-785dccc789-hknk8" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.169478 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.750677 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7"] Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.751334 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.752940 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zvdsz" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.753623 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.756331 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-zq9tk"] Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.758242 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.765546 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.766361 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.776519 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7"] Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.853127 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-gqfwd"] Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.854671 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-gqfwd" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.861282 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-drr8w" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.861519 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.861753 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.861888 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.875208 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-hjx66"] Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.876645 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-hjx66" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.882307 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.891955 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-hjx66"] Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.901725 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vnsx\" (UniqueName: \"kubernetes.io/projected/c007ba51-6685-419c-ad1e-0832056671fc-kube-api-access-4vnsx\") pod \"frr-k8s-webhook-server-6998585d5-qvnx7\" (UID: \"c007ba51-6685-419c-ad1e-0832056671fc\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.901761 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-frr-sockets\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.901796 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-reloader\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.901815 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-frr-startup\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.901835 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c007ba51-6685-419c-ad1e-0832056671fc-cert\") pod \"frr-k8s-webhook-server-6998585d5-qvnx7\" (UID: \"c007ba51-6685-419c-ad1e-0832056671fc\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.901853 4482 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-metrics\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.901883 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-frr-conf\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.901926 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4bn\" (UniqueName: \"kubernetes.io/projected/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-kube-api-access-lc4bn\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:17 crc kubenswrapper[4482]: I1125 06:58:17.901948 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-metrics-certs\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003220 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-metrics-certs\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003373 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vnsx\" (UniqueName: \"kubernetes.io/projected/c007ba51-6685-419c-ad1e-0832056671fc-kube-api-access-4vnsx\") pod \"frr-k8s-webhook-server-6998585d5-qvnx7\" (UID: \"c007ba51-6685-419c-ad1e-0832056671fc\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003425 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-frr-sockets\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003458 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/060b17d5-e982-4320-869e-99ca2727296a-metrics-certs\") pod \"controller-6c7b4b5f48-hjx66\" (UID: \"060b17d5-e982-4320-869e-99ca2727296a\") " pod="metallb-system/controller-6c7b4b5f48-hjx66" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003499 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flbf4\" (UniqueName: \"kubernetes.io/projected/daa60e38-de6c-4144-8ca3-1e35de41eb28-kube-api-access-flbf4\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") " pod="metallb-system/speaker-gqfwd" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003518 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nlx7g\" (UniqueName: \"kubernetes.io/projected/060b17d5-e982-4320-869e-99ca2727296a-kube-api-access-nlx7g\") pod \"controller-6c7b4b5f48-hjx66\" (UID: \"060b17d5-e982-4320-869e-99ca2727296a\") " pod="metallb-system/controller-6c7b4b5f48-hjx66" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003555 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-reloader\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003574 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-frr-startup\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003597 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c007ba51-6685-419c-ad1e-0832056671fc-cert\") pod \"frr-k8s-webhook-server-6998585d5-qvnx7\" (UID: \"c007ba51-6685-419c-ad1e-0832056671fc\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003616 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-metrics\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003671 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-frr-conf\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003741 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/060b17d5-e982-4320-869e-99ca2727296a-cert\") pod \"controller-6c7b4b5f48-hjx66\" (UID: \"060b17d5-e982-4320-869e-99ca2727296a\") " pod="metallb-system/controller-6c7b4b5f48-hjx66" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003772 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/daa60e38-de6c-4144-8ca3-1e35de41eb28-metallb-excludel2\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") " pod="metallb-system/speaker-gqfwd" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003809 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daa60e38-de6c-4144-8ca3-1e35de41eb28-metrics-certs\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") " pod="metallb-system/speaker-gqfwd" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003827 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/daa60e38-de6c-4144-8ca3-1e35de41eb28-memberlist\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") 
" pod="metallb-system/speaker-gqfwd" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.003848 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc4bn\" (UniqueName: \"kubernetes.io/projected/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-kube-api-access-lc4bn\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.005293 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-frr-conf\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.005385 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-reloader\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: E1125 06:58:18.005548 4482 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.005563 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-frr-sockets\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: E1125 06:58:18.005633 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c007ba51-6685-419c-ad1e-0832056671fc-cert podName:c007ba51-6685-419c-ad1e-0832056671fc nodeName:}" failed. No retries permitted until 2025-11-25 06:58:18.505602864 +0000 UTC m=+672.993834122 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c007ba51-6685-419c-ad1e-0832056671fc-cert") pod "frr-k8s-webhook-server-6998585d5-qvnx7" (UID: "c007ba51-6685-419c-ad1e-0832056671fc") : secret "frr-k8s-webhook-server-cert" not found Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.005707 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-frr-startup\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.005729 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-metrics\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.013883 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-metrics-certs\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.021784 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vnsx\" (UniqueName: \"kubernetes.io/projected/c007ba51-6685-419c-ad1e-0832056671fc-kube-api-access-4vnsx\") pod \"frr-k8s-webhook-server-6998585d5-qvnx7\" (UID: \"c007ba51-6685-419c-ad1e-0832056671fc\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.028606 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc4bn\" (UniqueName: \"kubernetes.io/projected/afe0e5ce-f60d-46ab-9655-4b65ae59d02f-kube-api-access-lc4bn\") pod \"frr-k8s-zq9tk\" (UID: \"afe0e5ce-f60d-46ab-9655-4b65ae59d02f\") " pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.083847 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.105505 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/060b17d5-e982-4320-869e-99ca2727296a-metrics-certs\") pod \"controller-6c7b4b5f48-hjx66\" (UID: \"060b17d5-e982-4320-869e-99ca2727296a\") " pod="metallb-system/controller-6c7b4b5f48-hjx66" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.105554 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flbf4\" (UniqueName: \"kubernetes.io/projected/daa60e38-de6c-4144-8ca3-1e35de41eb28-kube-api-access-flbf4\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") " pod="metallb-system/speaker-gqfwd" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.105578 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlx7g\" (UniqueName: \"kubernetes.io/projected/060b17d5-e982-4320-869e-99ca2727296a-kube-api-access-nlx7g\") pod \"controller-6c7b4b5f48-hjx66\" (UID: \"060b17d5-e982-4320-869e-99ca2727296a\") " pod="metallb-system/controller-6c7b4b5f48-hjx66" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.105659 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/060b17d5-e982-4320-869e-99ca2727296a-cert\") pod \"controller-6c7b4b5f48-hjx66\" (UID: \"060b17d5-e982-4320-869e-99ca2727296a\") " pod="metallb-system/controller-6c7b4b5f48-hjx66" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.105688 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/daa60e38-de6c-4144-8ca3-1e35de41eb28-metallb-excludel2\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") " pod="metallb-system/speaker-gqfwd" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.105716 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/daa60e38-de6c-4144-8ca3-1e35de41eb28-memberlist\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") " pod="metallb-system/speaker-gqfwd" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.105734 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daa60e38-de6c-4144-8ca3-1e35de41eb28-metrics-certs\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") " pod="metallb-system/speaker-gqfwd" Nov 25 06:58:18 crc kubenswrapper[4482]: E1125 06:58:18.106021 4482 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 06:58:18 crc kubenswrapper[4482]: E1125 06:58:18.106103 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daa60e38-de6c-4144-8ca3-1e35de41eb28-memberlist podName:daa60e38-de6c-4144-8ca3-1e35de41eb28 nodeName:}" failed. No retries permitted until 2025-11-25 06:58:18.606078057 +0000 UTC m=+673.094309316 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/daa60e38-de6c-4144-8ca3-1e35de41eb28-memberlist") pod "speaker-gqfwd" (UID: "daa60e38-de6c-4144-8ca3-1e35de41eb28") : secret "metallb-memberlist" not found Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.107867 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/daa60e38-de6c-4144-8ca3-1e35de41eb28-metallb-excludel2\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") " pod="metallb-system/speaker-gqfwd" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.108798 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.110819 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/060b17d5-e982-4320-869e-99ca2727296a-metrics-certs\") pod \"controller-6c7b4b5f48-hjx66\" (UID: \"060b17d5-e982-4320-869e-99ca2727296a\") " pod="metallb-system/controller-6c7b4b5f48-hjx66" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.117373 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/daa60e38-de6c-4144-8ca3-1e35de41eb28-metrics-certs\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") " pod="metallb-system/speaker-gqfwd" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.123605 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/060b17d5-e982-4320-869e-99ca2727296a-cert\") pod \"controller-6c7b4b5f48-hjx66\" (UID: \"060b17d5-e982-4320-869e-99ca2727296a\") " pod="metallb-system/controller-6c7b4b5f48-hjx66" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.123959 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flbf4\" (UniqueName: \"kubernetes.io/projected/daa60e38-de6c-4144-8ca3-1e35de41eb28-kube-api-access-flbf4\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") " pod="metallb-system/speaker-gqfwd" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.124832 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlx7g\" (UniqueName: \"kubernetes.io/projected/060b17d5-e982-4320-869e-99ca2727296a-kube-api-access-nlx7g\") pod \"controller-6c7b4b5f48-hjx66\" (UID: \"060b17d5-e982-4320-869e-99ca2727296a\") " pod="metallb-system/controller-6c7b4b5f48-hjx66" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.191427 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-hjx66" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.448608 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-hjx66"] Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.511366 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c007ba51-6685-419c-ad1e-0832056671fc-cert\") pod \"frr-k8s-webhook-server-6998585d5-qvnx7\" (UID: \"c007ba51-6685-419c-ad1e-0832056671fc\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.517679 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c007ba51-6685-419c-ad1e-0832056671fc-cert\") pod \"frr-k8s-webhook-server-6998585d5-qvnx7\" (UID: \"c007ba51-6685-419c-ad1e-0832056671fc\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.612653 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/daa60e38-de6c-4144-8ca3-1e35de41eb28-memberlist\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") " pod="metallb-system/speaker-gqfwd" Nov 25 06:58:18 crc kubenswrapper[4482]: E1125 06:58:18.612837 4482 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 06:58:18 crc kubenswrapper[4482]: E1125 06:58:18.612883 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daa60e38-de6c-4144-8ca3-1e35de41eb28-memberlist podName:daa60e38-de6c-4144-8ca3-1e35de41eb28 nodeName:}" failed. No retries permitted until 2025-11-25 06:58:19.612869827 +0000 UTC m=+674.101101087 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/daa60e38-de6c-4144-8ca3-1e35de41eb28-memberlist") pod "speaker-gqfwd" (UID: "daa60e38-de6c-4144-8ca3-1e35de41eb28") : secret "metallb-memberlist" not found Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.673728 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" Nov 25 06:58:18 crc kubenswrapper[4482]: I1125 06:58:18.893770 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7"] Nov 25 06:58:18 crc kubenswrapper[4482]: W1125 06:58:18.898623 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc007ba51_6685_419c_ad1e_0832056671fc.slice/crio-4569ef498bfef5c595d156453b6bbd0bc71651e0fe6561765c7ebc7a3d487cb8 WatchSource:0}: Error finding container 4569ef498bfef5c595d156453b6bbd0bc71651e0fe6561765c7ebc7a3d487cb8: Status 404 returned error can't find the container with id 4569ef498bfef5c595d156453b6bbd0bc71651e0fe6561765c7ebc7a3d487cb8 Nov 25 06:58:19 crc kubenswrapper[4482]: I1125 06:58:19.127394 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-hjx66" event={"ID":"060b17d5-e982-4320-869e-99ca2727296a","Type":"ContainerStarted","Data":"464ce51f895aa4cc284b01bfe1e0d0c6cc50eaf683170bbc2f1604c8dab00ede"} Nov 25 06:58:19 crc kubenswrapper[4482]: I1125 06:58:19.127443 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-hjx66" event={"ID":"060b17d5-e982-4320-869e-99ca2727296a","Type":"ContainerStarted","Data":"9adaaef66cc6376d061b62d159776c87db4f483b31fb1f5e13d00c315ef31b5f"} Nov 25 06:58:19 crc kubenswrapper[4482]: I1125 06:58:19.127458 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-hjx66" event={"ID":"060b17d5-e982-4320-869e-99ca2727296a","Type":"ContainerStarted","Data":"baf3cba5fb9bfa9891fd8d49c8c8ede307a778ac85497da636f7e4f39e7d3195"} Nov 25 06:58:19 crc kubenswrapper[4482]: I1125 06:58:19.127575 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-hjx66" Nov 25 06:58:19 crc kubenswrapper[4482]: I1125 06:58:19.128736 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" event={"ID":"c007ba51-6685-419c-ad1e-0832056671fc","Type":"ContainerStarted","Data":"4569ef498bfef5c595d156453b6bbd0bc71651e0fe6561765c7ebc7a3d487cb8"} Nov 25 06:58:19 crc kubenswrapper[4482]: I1125 06:58:19.130246 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zq9tk" event={"ID":"afe0e5ce-f60d-46ab-9655-4b65ae59d02f","Type":"ContainerStarted","Data":"933612f2e789071bc8c18bc51de96fcfce77b16abcf7c4c5aeca5cf0a92286b5"} Nov 25 06:58:19 crc kubenswrapper[4482]: I1125 06:58:19.141683 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-hjx66" podStartSLOduration=2.141665148 podStartE2EDuration="2.141665148s" podCreationTimestamp="2025-11-25 06:58:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:58:19.14028208 +0000 UTC m=+673.628513339" watchObservedRunningTime="2025-11-25 06:58:19.141665148 +0000 UTC m=+673.629896406" Nov 25 06:58:19 crc kubenswrapper[4482]: I1125 06:58:19.628036 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/daa60e38-de6c-4144-8ca3-1e35de41eb28-memberlist\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") " pod="metallb-system/speaker-gqfwd" Nov 25 06:58:19 crc kubenswrapper[4482]: I1125 
06:58:19.640058 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/daa60e38-de6c-4144-8ca3-1e35de41eb28-memberlist\") pod \"speaker-gqfwd\" (UID: \"daa60e38-de6c-4144-8ca3-1e35de41eb28\") " pod="metallb-system/speaker-gqfwd" Nov 25 06:58:19 crc kubenswrapper[4482]: I1125 06:58:19.677493 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gqfwd" Nov 25 06:58:19 crc kubenswrapper[4482]: W1125 06:58:19.738571 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddaa60e38_de6c_4144_8ca3_1e35de41eb28.slice/crio-e9691802637b98e71d72d6a0191ab73e78ea71d05cafb82b7b38bdb3a20da714 WatchSource:0}: Error finding container e9691802637b98e71d72d6a0191ab73e78ea71d05cafb82b7b38bdb3a20da714: Status 404 returned error can't find the container with id e9691802637b98e71d72d6a0191ab73e78ea71d05cafb82b7b38bdb3a20da714 Nov 25 06:58:20 crc kubenswrapper[4482]: I1125 06:58:20.154347 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gqfwd" event={"ID":"daa60e38-de6c-4144-8ca3-1e35de41eb28","Type":"ContainerStarted","Data":"0b93694887436f47d9eb82462104f7a1519f4f9adc0685afe7329df62db07d88"} Nov 25 06:58:20 crc kubenswrapper[4482]: I1125 06:58:20.154609 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gqfwd" event={"ID":"daa60e38-de6c-4144-8ca3-1e35de41eb28","Type":"ContainerStarted","Data":"e9691802637b98e71d72d6a0191ab73e78ea71d05cafb82b7b38bdb3a20da714"} Nov 25 06:58:21 crc kubenswrapper[4482]: I1125 06:58:21.168430 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gqfwd" event={"ID":"daa60e38-de6c-4144-8ca3-1e35de41eb28","Type":"ContainerStarted","Data":"3291eb8df03028c16e2bc710d9b379ec7791fbf0bc73a5d44e74e20846d926f4"} Nov 25 06:58:21 crc kubenswrapper[4482]: I1125 06:58:21.168614 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-gqfwd" Nov 25 06:58:21 crc kubenswrapper[4482]: I1125 06:58:21.191577 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-gqfwd" podStartSLOduration=4.191556489 podStartE2EDuration="4.191556489s" podCreationTimestamp="2025-11-25 06:58:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:58:21.188196666 +0000 UTC m=+675.676427926" watchObservedRunningTime="2025-11-25 06:58:21.191556489 +0000 UTC m=+675.679787748" Nov 25 06:58:26 crc kubenswrapper[4482]: I1125 06:58:26.210154 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" event={"ID":"c007ba51-6685-419c-ad1e-0832056671fc","Type":"ContainerStarted","Data":"205d71f72ac3bda35838e01a46ded3093c819b4331cb3c2107bb3b9f8d2d7286"} Nov 25 06:58:26 crc kubenswrapper[4482]: I1125 06:58:26.212255 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" Nov 25 06:58:26 crc kubenswrapper[4482]: I1125 06:58:26.212410 4482 generic.go:334] "Generic (PLEG): container finished" podID="afe0e5ce-f60d-46ab-9655-4b65ae59d02f" containerID="b9f6a9e4775df47e74786a1eba768e8838d371e4ea8c10133a80798dfcaaa096" exitCode=0 Nov 25 06:58:26 crc kubenswrapper[4482]: I1125 06:58:26.212445 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-zq9tk" event={"ID":"afe0e5ce-f60d-46ab-9655-4b65ae59d02f","Type":"ContainerDied","Data":"b9f6a9e4775df47e74786a1eba768e8838d371e4ea8c10133a80798dfcaaa096"} Nov 25 06:58:26 crc kubenswrapper[4482]: I1125 06:58:26.226857 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" podStartSLOduration=2.635793167 podStartE2EDuration="9.226842599s" podCreationTimestamp="2025-11-25 06:58:17 +0000 UTC" firstStartedPulling="2025-11-25 06:58:18.900848249 +0000 UTC m=+673.389079508" lastFinishedPulling="2025-11-25 06:58:25.491897681 +0000 UTC m=+679.980128940" observedRunningTime="2025-11-25 06:58:26.225545614 +0000 UTC m=+680.713776874" watchObservedRunningTime="2025-11-25 06:58:26.226842599 +0000 UTC m=+680.715073848" Nov 25 06:58:27 crc kubenswrapper[4482]: I1125 06:58:27.219956 4482 generic.go:334] "Generic (PLEG): container finished" podID="afe0e5ce-f60d-46ab-9655-4b65ae59d02f" containerID="5cc178b5985f7a1a8efbb08cdaeeb89d63b792855455a2babdfa7fd9f9e94488" exitCode=0 Nov 25 06:58:27 crc kubenswrapper[4482]: I1125 06:58:27.219998 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zq9tk" event={"ID":"afe0e5ce-f60d-46ab-9655-4b65ae59d02f","Type":"ContainerDied","Data":"5cc178b5985f7a1a8efbb08cdaeeb89d63b792855455a2babdfa7fd9f9e94488"} Nov 25 06:58:28 crc kubenswrapper[4482]: I1125 06:58:28.195817 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-hjx66" Nov 25 06:58:28 crc kubenswrapper[4482]: I1125 06:58:28.226897 4482 generic.go:334] "Generic (PLEG): container finished" podID="afe0e5ce-f60d-46ab-9655-4b65ae59d02f" containerID="b629446ff70935d5e2cd69eabb2572629f9cbbdbe20e6a92cface345e5858684" exitCode=0 Nov 25 06:58:28 crc kubenswrapper[4482]: I1125 06:58:28.226932 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zq9tk" event={"ID":"afe0e5ce-f60d-46ab-9655-4b65ae59d02f","Type":"ContainerDied","Data":"b629446ff70935d5e2cd69eabb2572629f9cbbdbe20e6a92cface345e5858684"} Nov 25 06:58:29 crc kubenswrapper[4482]: I1125 06:58:29.240701 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zq9tk" event={"ID":"afe0e5ce-f60d-46ab-9655-4b65ae59d02f","Type":"ContainerStarted","Data":"12e61b57b285676733ec9ea9d4e3f5894119342c80d0ff5b956030bee9881508"} Nov 25 06:58:29 crc kubenswrapper[4482]: I1125 06:58:29.241156 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:29 crc kubenswrapper[4482]: I1125 06:58:29.241199 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zq9tk" event={"ID":"afe0e5ce-f60d-46ab-9655-4b65ae59d02f","Type":"ContainerStarted","Data":"23d5e404830c69a8736838319d0dadce99ed17dea08926204c279870249a7c96"} Nov 25 06:58:29 crc kubenswrapper[4482]: I1125 06:58:29.241217 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zq9tk" event={"ID":"afe0e5ce-f60d-46ab-9655-4b65ae59d02f","Type":"ContainerStarted","Data":"8e1a2064b29d0771472f9abc67bf9fd5b936029fd8ef905dd0616e00318dc3f4"} Nov 25 06:58:29 crc kubenswrapper[4482]: I1125 06:58:29.241242 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zq9tk" event={"ID":"afe0e5ce-f60d-46ab-9655-4b65ae59d02f","Type":"ContainerStarted","Data":"a206a957a67d69d70d527d962042a8838d574453a1844b2db6eca27ad54db2ab"} Nov 25 06:58:29 crc kubenswrapper[4482]: 
I1125 06:58:29.241251 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zq9tk" event={"ID":"afe0e5ce-f60d-46ab-9655-4b65ae59d02f","Type":"ContainerStarted","Data":"532256e41e2dcf73a40f702a472f792e3890593da92c570f440d9b0b80966a49"} Nov 25 06:58:29 crc kubenswrapper[4482]: I1125 06:58:29.241285 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zq9tk" event={"ID":"afe0e5ce-f60d-46ab-9655-4b65ae59d02f","Type":"ContainerStarted","Data":"a948774f3dd337b670727de32370b4c4368563c7bd7a2b82fb69f86682e1e639"} Nov 25 06:58:29 crc kubenswrapper[4482]: I1125 06:58:29.261506 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-zq9tk" podStartSLOduration=5.10454388 podStartE2EDuration="12.261479371s" podCreationTimestamp="2025-11-25 06:58:17 +0000 UTC" firstStartedPulling="2025-11-25 06:58:18.328727548 +0000 UTC m=+672.816958797" lastFinishedPulling="2025-11-25 06:58:25.485663039 +0000 UTC m=+679.973894288" observedRunningTime="2025-11-25 06:58:29.258577421 +0000 UTC m=+683.746808670" watchObservedRunningTime="2025-11-25 06:58:29.261479371 +0000 UTC m=+683.749710630" Nov 25 06:58:33 crc kubenswrapper[4482]: I1125 06:58:33.084526 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:33 crc kubenswrapper[4482]: I1125 06:58:33.114112 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:38 crc kubenswrapper[4482]: I1125 06:58:38.088346 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-zq9tk" Nov 25 06:58:38 crc kubenswrapper[4482]: I1125 06:58:38.679830 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qvnx7" Nov 25 06:58:39 crc kubenswrapper[4482]: I1125 06:58:39.117725 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 06:58:39 crc kubenswrapper[4482]: I1125 06:58:39.118032 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 06:58:39 crc kubenswrapper[4482]: I1125 06:58:39.682202 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-gqfwd" Nov 25 06:58:42 crc kubenswrapper[4482]: I1125 06:58:41.999880 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vgzg8"] Nov 25 06:58:42 crc kubenswrapper[4482]: I1125 06:58:42.001087 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vgzg8" Nov 25 06:58:42 crc kubenswrapper[4482]: I1125 06:58:42.003242 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 06:58:42 crc kubenswrapper[4482]: I1125 06:58:42.004438 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-n574j" Nov 25 06:58:42 crc kubenswrapper[4482]: I1125 06:58:42.004470 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 06:58:42 crc kubenswrapper[4482]: I1125 06:58:42.058780 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vgzg8"] Nov 25 06:58:42 crc kubenswrapper[4482]: I1125 06:58:42.158084 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frn9g\" (UniqueName: \"kubernetes.io/projected/f67399b0-2042-4083-8231-f12c29ca5e17-kube-api-access-frn9g\") pod \"openstack-operator-index-vgzg8\" (UID: \"f67399b0-2042-4083-8231-f12c29ca5e17\") " pod="openstack-operators/openstack-operator-index-vgzg8" Nov 25 06:58:42 crc kubenswrapper[4482]: I1125 06:58:42.259583 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frn9g\" (UniqueName: \"kubernetes.io/projected/f67399b0-2042-4083-8231-f12c29ca5e17-kube-api-access-frn9g\") pod \"openstack-operator-index-vgzg8\" (UID: \"f67399b0-2042-4083-8231-f12c29ca5e17\") " pod="openstack-operators/openstack-operator-index-vgzg8" Nov 25 06:58:42 crc kubenswrapper[4482]: I1125 06:58:42.278461 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frn9g\" (UniqueName: \"kubernetes.io/projected/f67399b0-2042-4083-8231-f12c29ca5e17-kube-api-access-frn9g\") pod \"openstack-operator-index-vgzg8\" (UID: \"f67399b0-2042-4083-8231-f12c29ca5e17\") " pod="openstack-operators/openstack-operator-index-vgzg8" Nov 25 06:58:42 crc kubenswrapper[4482]: I1125 06:58:42.345916 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vgzg8" Nov 25 06:58:42 crc kubenswrapper[4482]: I1125 06:58:42.703805 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vgzg8"] Nov 25 06:58:43 crc kubenswrapper[4482]: I1125 06:58:43.334997 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vgzg8" event={"ID":"f67399b0-2042-4083-8231-f12c29ca5e17","Type":"ContainerStarted","Data":"cfe530dd1f0b2456754743fc64ac16782640ec7de318612062fab00ed299661b"} Nov 25 06:58:44 crc kubenswrapper[4482]: I1125 06:58:44.345371 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vgzg8" event={"ID":"f67399b0-2042-4083-8231-f12c29ca5e17","Type":"ContainerStarted","Data":"5e12b8a9ab7d730d483e59cf153cac5b730288fb046d391c3cc325049d14ffcf"} Nov 25 06:58:45 crc kubenswrapper[4482]: I1125 06:58:45.359196 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vgzg8" podStartSLOduration=3.376820433 podStartE2EDuration="4.359183001s" podCreationTimestamp="2025-11-25 06:58:41 +0000 UTC" firstStartedPulling="2025-11-25 06:58:42.712505963 +0000 UTC m=+697.200737222" lastFinishedPulling="2025-11-25 06:58:43.694868531 +0000 UTC m=+698.183099790" observedRunningTime="2025-11-25 06:58:44.361556117 +0000 UTC m=+698.849787377" watchObservedRunningTime="2025-11-25 06:58:45.359183001 +0000 UTC m=+699.847414260" Nov 25 06:58:45 crc kubenswrapper[4482]: I1125 06:58:45.362150 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vgzg8"] Nov 25 06:58:45 crc kubenswrapper[4482]: I1125 06:58:45.991532 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5xswz"] Nov 25 06:58:45 crc kubenswrapper[4482]: I1125 06:58:45.994097 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5xswz" Nov 25 06:58:45 crc kubenswrapper[4482]: I1125 06:58:45.995038 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5xswz"] Nov 25 06:58:46 crc kubenswrapper[4482]: I1125 06:58:46.110263 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2s5z\" (UniqueName: \"kubernetes.io/projected/c74dc450-4e3e-4e0e-95ca-d54aa9f5b12d-kube-api-access-w2s5z\") pod \"openstack-operator-index-5xswz\" (UID: \"c74dc450-4e3e-4e0e-95ca-d54aa9f5b12d\") " pod="openstack-operators/openstack-operator-index-5xswz" Nov 25 06:58:46 crc kubenswrapper[4482]: I1125 06:58:46.212695 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2s5z\" (UniqueName: \"kubernetes.io/projected/c74dc450-4e3e-4e0e-95ca-d54aa9f5b12d-kube-api-access-w2s5z\") pod \"openstack-operator-index-5xswz\" (UID: \"c74dc450-4e3e-4e0e-95ca-d54aa9f5b12d\") " pod="openstack-operators/openstack-operator-index-5xswz" Nov 25 06:58:46 crc kubenswrapper[4482]: I1125 06:58:46.231483 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2s5z\" (UniqueName: \"kubernetes.io/projected/c74dc450-4e3e-4e0e-95ca-d54aa9f5b12d-kube-api-access-w2s5z\") pod \"openstack-operator-index-5xswz\" (UID: \"c74dc450-4e3e-4e0e-95ca-d54aa9f5b12d\") " pod="openstack-operators/openstack-operator-index-5xswz" Nov 25 06:58:46 crc kubenswrapper[4482]: I1125 06:58:46.309157 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5xswz" Nov 25 06:58:46 crc kubenswrapper[4482]: I1125 06:58:46.361052 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vgzg8" podUID="f67399b0-2042-4083-8231-f12c29ca5e17" containerName="registry-server" containerID="cri-o://5e12b8a9ab7d730d483e59cf153cac5b730288fb046d391c3cc325049d14ffcf" gracePeriod=2 Nov 25 06:58:46 crc kubenswrapper[4482]: I1125 06:58:46.675133 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vgzg8" Nov 25 06:58:46 crc kubenswrapper[4482]: I1125 06:58:46.687784 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5xswz"] Nov 25 06:58:46 crc kubenswrapper[4482]: W1125 06:58:46.700571 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc74dc450_4e3e_4e0e_95ca_d54aa9f5b12d.slice/crio-09de4462dd1ef9f81d2604084b57d84b1482d5f93bbd260556a213556997b980 WatchSource:0}: Error finding container 09de4462dd1ef9f81d2604084b57d84b1482d5f93bbd260556a213556997b980: Status 404 returned error can't find the container with id 09de4462dd1ef9f81d2604084b57d84b1482d5f93bbd260556a213556997b980 Nov 25 06:58:46 crc kubenswrapper[4482]: I1125 06:58:46.819496 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frn9g\" (UniqueName: \"kubernetes.io/projected/f67399b0-2042-4083-8231-f12c29ca5e17-kube-api-access-frn9g\") pod \"f67399b0-2042-4083-8231-f12c29ca5e17\" (UID: \"f67399b0-2042-4083-8231-f12c29ca5e17\") " Nov 25 06:58:46 crc kubenswrapper[4482]: I1125 06:58:46.825276 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f67399b0-2042-4083-8231-f12c29ca5e17-kube-api-access-frn9g" (OuterVolumeSpecName: "kube-api-access-frn9g") pod "f67399b0-2042-4083-8231-f12c29ca5e17" (UID: "f67399b0-2042-4083-8231-f12c29ca5e17"). InnerVolumeSpecName "kube-api-access-frn9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:58:46 crc kubenswrapper[4482]: I1125 06:58:46.921193 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frn9g\" (UniqueName: \"kubernetes.io/projected/f67399b0-2042-4083-8231-f12c29ca5e17-kube-api-access-frn9g\") on node \"crc\" DevicePath \"\"" Nov 25 06:58:47 crc kubenswrapper[4482]: I1125 06:58:47.370485 4482 generic.go:334] "Generic (PLEG): container finished" podID="f67399b0-2042-4083-8231-f12c29ca5e17" containerID="5e12b8a9ab7d730d483e59cf153cac5b730288fb046d391c3cc325049d14ffcf" exitCode=0 Nov 25 06:58:47 crc kubenswrapper[4482]: I1125 06:58:47.370557 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vgzg8" Nov 25 06:58:47 crc kubenswrapper[4482]: I1125 06:58:47.370595 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vgzg8" event={"ID":"f67399b0-2042-4083-8231-f12c29ca5e17","Type":"ContainerDied","Data":"5e12b8a9ab7d730d483e59cf153cac5b730288fb046d391c3cc325049d14ffcf"} Nov 25 06:58:47 crc kubenswrapper[4482]: I1125 06:58:47.370646 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vgzg8" event={"ID":"f67399b0-2042-4083-8231-f12c29ca5e17","Type":"ContainerDied","Data":"cfe530dd1f0b2456754743fc64ac16782640ec7de318612062fab00ed299661b"} Nov 25 06:58:47 crc kubenswrapper[4482]: I1125 06:58:47.370672 4482 scope.go:117] "RemoveContainer" containerID="5e12b8a9ab7d730d483e59cf153cac5b730288fb046d391c3cc325049d14ffcf" Nov 25 06:58:47 crc kubenswrapper[4482]: I1125 06:58:47.372496 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5xswz" event={"ID":"c74dc450-4e3e-4e0e-95ca-d54aa9f5b12d","Type":"ContainerStarted","Data":"09de4462dd1ef9f81d2604084b57d84b1482d5f93bbd260556a213556997b980"} Nov 25 06:58:47 crc kubenswrapper[4482]: I1125 06:58:47.401250 4482 scope.go:117] "RemoveContainer" containerID="5e12b8a9ab7d730d483e59cf153cac5b730288fb046d391c3cc325049d14ffcf" Nov 25 06:58:47 crc kubenswrapper[4482]: E1125 06:58:47.401694 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e12b8a9ab7d730d483e59cf153cac5b730288fb046d391c3cc325049d14ffcf\": container with ID starting with 5e12b8a9ab7d730d483e59cf153cac5b730288fb046d391c3cc325049d14ffcf not found: ID does not exist" containerID="5e12b8a9ab7d730d483e59cf153cac5b730288fb046d391c3cc325049d14ffcf" Nov 25 06:58:47 crc kubenswrapper[4482]: I1125 06:58:47.401729 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e12b8a9ab7d730d483e59cf153cac5b730288fb046d391c3cc325049d14ffcf"} err="failed to get container status \"5e12b8a9ab7d730d483e59cf153cac5b730288fb046d391c3cc325049d14ffcf\": rpc error: code = NotFound desc = could not find container \"5e12b8a9ab7d730d483e59cf153cac5b730288fb046d391c3cc325049d14ffcf\": container with ID starting with 5e12b8a9ab7d730d483e59cf153cac5b730288fb046d391c3cc325049d14ffcf not found: ID does not exist" Nov 25 06:58:47 crc kubenswrapper[4482]: I1125 06:58:47.429029 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vgzg8"] Nov 25 06:58:47 crc kubenswrapper[4482]: I1125 06:58:47.439121 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vgzg8"] Nov 25 06:58:47 crc kubenswrapper[4482]: I1125 06:58:47.837080 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f67399b0-2042-4083-8231-f12c29ca5e17" path="/var/lib/kubelet/pods/f67399b0-2042-4083-8231-f12c29ca5e17/volumes" Nov 25 06:58:48 crc kubenswrapper[4482]: I1125 06:58:48.380572 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5xswz" event={"ID":"c74dc450-4e3e-4e0e-95ca-d54aa9f5b12d","Type":"ContainerStarted","Data":"faea0bb2b99d39698e1d21f079a7527d3412104b99d026625f53b3b646ccbb89"} Nov 25 06:58:48 crc kubenswrapper[4482]: I1125 06:58:48.395915 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-index-5xswz" podStartSLOduration=2.75469828 podStartE2EDuration="3.395887591s" podCreationTimestamp="2025-11-25 06:58:45 +0000 UTC" firstStartedPulling="2025-11-25 06:58:46.703669931 +0000 UTC m=+701.191901190" lastFinishedPulling="2025-11-25 06:58:47.344859242 +0000 UTC m=+701.833090501" observedRunningTime="2025-11-25 06:58:48.394461963 +0000 UTC m=+702.882693222" watchObservedRunningTime="2025-11-25 06:58:48.395887591 +0000 UTC m=+702.884118839" Nov 25 06:58:56 crc kubenswrapper[4482]: I1125 06:58:56.310137 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-5xswz" Nov 25 06:58:56 crc kubenswrapper[4482]: I1125 06:58:56.310731 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5xswz" Nov 25 06:58:56 crc kubenswrapper[4482]: I1125 06:58:56.335289 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5xswz" Nov 25 06:58:56 crc kubenswrapper[4482]: I1125 06:58:56.434724 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5xswz" Nov 25 06:58:58 crc kubenswrapper[4482]: I1125 06:58:58.999184 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6"] Nov 25 06:58:59 crc kubenswrapper[4482]: E1125 06:58:59.000407 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67399b0-2042-4083-8231-f12c29ca5e17" containerName="registry-server" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.000438 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67399b0-2042-4083-8231-f12c29ca5e17" containerName="registry-server" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.000564 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67399b0-2042-4083-8231-f12c29ca5e17" containerName="registry-server" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.001361 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.004134 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-7mrj8" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.009952 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6"] Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.054398 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4a41def6-86e5-4cf1-8749-c68716b6d3bf-util\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6\" (UID: \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.054450 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwb2t\" (UniqueName: \"kubernetes.io/projected/4a41def6-86e5-4cf1-8749-c68716b6d3bf-kube-api-access-lwb2t\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6\" (UID: \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.054562 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4a41def6-86e5-4cf1-8749-c68716b6d3bf-bundle\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6\" (UID: \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.156468 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4a41def6-86e5-4cf1-8749-c68716b6d3bf-util\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6\" (UID: \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.156543 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwb2t\" (UniqueName: \"kubernetes.io/projected/4a41def6-86e5-4cf1-8749-c68716b6d3bf-kube-api-access-lwb2t\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6\" (UID: \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.156600 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4a41def6-86e5-4cf1-8749-c68716b6d3bf-bundle\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6\" (UID: \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.157060 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/4a41def6-86e5-4cf1-8749-c68716b6d3bf-util\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6\" (UID: \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.157095 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4a41def6-86e5-4cf1-8749-c68716b6d3bf-bundle\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6\" (UID: \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.174540 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwb2t\" (UniqueName: \"kubernetes.io/projected/4a41def6-86e5-4cf1-8749-c68716b6d3bf-kube-api-access-lwb2t\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6\" (UID: \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.314946 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" Nov 25 06:58:59 crc kubenswrapper[4482]: I1125 06:58:59.694495 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6"] Nov 25 06:59:00 crc kubenswrapper[4482]: I1125 06:59:00.445430 4482 generic.go:334] "Generic (PLEG): container finished" podID="4a41def6-86e5-4cf1-8749-c68716b6d3bf" containerID="dcdbba16e3aec3b662cc8d2ac9d5d18ccc9025cd15d0c42d81e29b9ac5c82e84" exitCode=0 Nov 25 06:59:00 crc kubenswrapper[4482]: I1125 06:59:00.445533 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" event={"ID":"4a41def6-86e5-4cf1-8749-c68716b6d3bf","Type":"ContainerDied","Data":"dcdbba16e3aec3b662cc8d2ac9d5d18ccc9025cd15d0c42d81e29b9ac5c82e84"} Nov 25 06:59:00 crc kubenswrapper[4482]: I1125 06:59:00.445747 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" event={"ID":"4a41def6-86e5-4cf1-8749-c68716b6d3bf","Type":"ContainerStarted","Data":"aa67261288ae6bff21f3e1120544331fd8984c859a4bf114644b7df1448e98e1"} Nov 25 06:59:01 crc kubenswrapper[4482]: I1125 06:59:01.457090 4482 generic.go:334] "Generic (PLEG): container finished" podID="4a41def6-86e5-4cf1-8749-c68716b6d3bf" containerID="fee95e12addb610c4d9668203c19215c1f8558439c2487485e48dc5942fa0fd2" exitCode=0 Nov 25 06:59:01 crc kubenswrapper[4482]: I1125 06:59:01.457218 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" event={"ID":"4a41def6-86e5-4cf1-8749-c68716b6d3bf","Type":"ContainerDied","Data":"fee95e12addb610c4d9668203c19215c1f8558439c2487485e48dc5942fa0fd2"} Nov 25 06:59:02 crc kubenswrapper[4482]: I1125 06:59:02.466563 4482 generic.go:334] "Generic (PLEG): container finished" podID="4a41def6-86e5-4cf1-8749-c68716b6d3bf" containerID="21f3c9ff7b991606c25927f4a2866ae8901f37af43596684db070dc6910dc981" exitCode=0 Nov 25 06:59:02 crc kubenswrapper[4482]: I1125 06:59:02.466625 4482 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" event={"ID":"4a41def6-86e5-4cf1-8749-c68716b6d3bf","Type":"ContainerDied","Data":"21f3c9ff7b991606c25927f4a2866ae8901f37af43596684db070dc6910dc981"} Nov 25 06:59:03 crc kubenswrapper[4482]: I1125 06:59:03.669443 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" Nov 25 06:59:03 crc kubenswrapper[4482]: I1125 06:59:03.819328 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4a41def6-86e5-4cf1-8749-c68716b6d3bf-bundle\") pod \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\" (UID: \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\") " Nov 25 06:59:03 crc kubenswrapper[4482]: I1125 06:59:03.819624 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4a41def6-86e5-4cf1-8749-c68716b6d3bf-util\") pod \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\" (UID: \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\") " Nov 25 06:59:03 crc kubenswrapper[4482]: I1125 06:59:03.819750 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwb2t\" (UniqueName: \"kubernetes.io/projected/4a41def6-86e5-4cf1-8749-c68716b6d3bf-kube-api-access-lwb2t\") pod \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\" (UID: \"4a41def6-86e5-4cf1-8749-c68716b6d3bf\") " Nov 25 06:59:03 crc kubenswrapper[4482]: I1125 06:59:03.820066 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a41def6-86e5-4cf1-8749-c68716b6d3bf-bundle" (OuterVolumeSpecName: "bundle") pod "4a41def6-86e5-4cf1-8749-c68716b6d3bf" (UID: "4a41def6-86e5-4cf1-8749-c68716b6d3bf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:59:03 crc kubenswrapper[4482]: I1125 06:59:03.825932 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a41def6-86e5-4cf1-8749-c68716b6d3bf-kube-api-access-lwb2t" (OuterVolumeSpecName: "kube-api-access-lwb2t") pod "4a41def6-86e5-4cf1-8749-c68716b6d3bf" (UID: "4a41def6-86e5-4cf1-8749-c68716b6d3bf"). InnerVolumeSpecName "kube-api-access-lwb2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:59:03 crc kubenswrapper[4482]: I1125 06:59:03.831483 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a41def6-86e5-4cf1-8749-c68716b6d3bf-util" (OuterVolumeSpecName: "util") pod "4a41def6-86e5-4cf1-8749-c68716b6d3bf" (UID: "4a41def6-86e5-4cf1-8749-c68716b6d3bf"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 06:59:03 crc kubenswrapper[4482]: I1125 06:59:03.921504 4482 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4a41def6-86e5-4cf1-8749-c68716b6d3bf-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 06:59:03 crc kubenswrapper[4482]: I1125 06:59:03.921534 4482 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4a41def6-86e5-4cf1-8749-c68716b6d3bf-util\") on node \"crc\" DevicePath \"\"" Nov 25 06:59:03 crc kubenswrapper[4482]: I1125 06:59:03.921546 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwb2t\" (UniqueName: \"kubernetes.io/projected/4a41def6-86e5-4cf1-8749-c68716b6d3bf-kube-api-access-lwb2t\") on node \"crc\" DevicePath \"\"" Nov 25 06:59:04 crc kubenswrapper[4482]: I1125 06:59:04.482797 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" event={"ID":"4a41def6-86e5-4cf1-8749-c68716b6d3bf","Type":"ContainerDied","Data":"aa67261288ae6bff21f3e1120544331fd8984c859a4bf114644b7df1448e98e1"} Nov 25 06:59:04 crc kubenswrapper[4482]: I1125 06:59:04.482840 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa67261288ae6bff21f3e1120544331fd8984c859a4bf114644b7df1448e98e1" Nov 25 06:59:04 crc kubenswrapper[4482]: I1125 06:59:04.483130 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15fvgfj6" Nov 25 06:59:09 crc kubenswrapper[4482]: I1125 06:59:09.117502 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 06:59:09 crc kubenswrapper[4482]: I1125 06:59:09.117833 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 06:59:11 crc kubenswrapper[4482]: I1125 06:59:11.022308 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt"] Nov 25 06:59:11 crc kubenswrapper[4482]: E1125 06:59:11.022677 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a41def6-86e5-4cf1-8749-c68716b6d3bf" containerName="pull" Nov 25 06:59:11 crc kubenswrapper[4482]: I1125 06:59:11.022698 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a41def6-86e5-4cf1-8749-c68716b6d3bf" containerName="pull" Nov 25 06:59:11 crc kubenswrapper[4482]: E1125 06:59:11.022712 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a41def6-86e5-4cf1-8749-c68716b6d3bf" containerName="extract" Nov 25 06:59:11 crc kubenswrapper[4482]: I1125 06:59:11.022717 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a41def6-86e5-4cf1-8749-c68716b6d3bf" containerName="extract" Nov 25 06:59:11 crc kubenswrapper[4482]: E1125 06:59:11.022736 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a41def6-86e5-4cf1-8749-c68716b6d3bf" containerName="util" Nov 25 
06:59:11 crc kubenswrapper[4482]: I1125 06:59:11.022742 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a41def6-86e5-4cf1-8749-c68716b6d3bf" containerName="util" Nov 25 06:59:11 crc kubenswrapper[4482]: I1125 06:59:11.022872 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a41def6-86e5-4cf1-8749-c68716b6d3bf" containerName="extract" Nov 25 06:59:11 crc kubenswrapper[4482]: I1125 06:59:11.023402 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt" Nov 25 06:59:11 crc kubenswrapper[4482]: I1125 06:59:11.025516 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-xthjx" Nov 25 06:59:11 crc kubenswrapper[4482]: I1125 06:59:11.044347 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt"] Nov 25 06:59:11 crc kubenswrapper[4482]: I1125 06:59:11.216947 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsb52\" (UniqueName: \"kubernetes.io/projected/a824a0e7-eb0a-4a5c-aafd-d01b622d6141-kube-api-access-dsb52\") pod \"openstack-operator-controller-operator-7b567956b5-hv5nt\" (UID: \"a824a0e7-eb0a-4a5c-aafd-d01b622d6141\") " pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt" Nov 25 06:59:11 crc kubenswrapper[4482]: I1125 06:59:11.318635 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsb52\" (UniqueName: \"kubernetes.io/projected/a824a0e7-eb0a-4a5c-aafd-d01b622d6141-kube-api-access-dsb52\") pod \"openstack-operator-controller-operator-7b567956b5-hv5nt\" (UID: \"a824a0e7-eb0a-4a5c-aafd-d01b622d6141\") " pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt" Nov 25 06:59:11 crc kubenswrapper[4482]: I1125 06:59:11.340129 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsb52\" (UniqueName: \"kubernetes.io/projected/a824a0e7-eb0a-4a5c-aafd-d01b622d6141-kube-api-access-dsb52\") pod \"openstack-operator-controller-operator-7b567956b5-hv5nt\" (UID: \"a824a0e7-eb0a-4a5c-aafd-d01b622d6141\") " pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt" Nov 25 06:59:11 crc kubenswrapper[4482]: I1125 06:59:11.638267 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt" Nov 25 06:59:11 crc kubenswrapper[4482]: I1125 06:59:11.850133 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt"] Nov 25 06:59:12 crc kubenswrapper[4482]: I1125 06:59:12.531481 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt" event={"ID":"a824a0e7-eb0a-4a5c-aafd-d01b622d6141","Type":"ContainerStarted","Data":"83628d815fa0235a328384ddc657270c8be02ca79eb5783c83ac7f3f2c9c9bfd"} Nov 25 06:59:16 crc kubenswrapper[4482]: I1125 06:59:16.566234 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt" event={"ID":"a824a0e7-eb0a-4a5c-aafd-d01b622d6141","Type":"ContainerStarted","Data":"4e101f6c433b2c16cddd678061e9c1a85ce78919aa1a41e94e14d0a0bb311358"} Nov 25 06:59:16 crc kubenswrapper[4482]: I1125 06:59:16.566859 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt" Nov 25 06:59:16 crc kubenswrapper[4482]: I1125 06:59:16.595551 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt" podStartSLOduration=1.598970934 podStartE2EDuration="5.59553476s" podCreationTimestamp="2025-11-25 06:59:11 +0000 UTC" firstStartedPulling="2025-11-25 06:59:11.8587425 +0000 UTC m=+726.346973759" lastFinishedPulling="2025-11-25 06:59:15.855306326 +0000 UTC m=+730.343537585" observedRunningTime="2025-11-25 06:59:16.592513657 +0000 UTC m=+731.080744916" watchObservedRunningTime="2025-11-25 06:59:16.59553476 +0000 UTC m=+731.083766019" Nov 25 06:59:21 crc kubenswrapper[4482]: I1125 06:59:21.643266 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.040715 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-shnd8"] Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.041764 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" podUID="ef330858-933c-41ce-b34b-db48cd8e8200" containerName="controller-manager" containerID="cri-o://2234c6f0436609d3eba4a8106c8c05843e0485276f308868a644297d1d0da6f5" gracePeriod=30 Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.118577 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w"] Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.118765 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" podUID="587f32ef-b1da-4e40-a1bc-33ba39c207e8" containerName="route-controller-manager" containerID="cri-o://a058f6c0e6389a23d1ceb171c0925794b88369ff80095ba01042413a4a01a7f6" gracePeriod=30 Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.581758 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.586084 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.622647 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-client-ca\") pod \"ef330858-933c-41ce-b34b-db48cd8e8200\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.622720 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-proxy-ca-bundles\") pod \"ef330858-933c-41ce-b34b-db48cd8e8200\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.622772 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef330858-933c-41ce-b34b-db48cd8e8200-serving-cert\") pod \"ef330858-933c-41ce-b34b-db48cd8e8200\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.623770 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef330858-933c-41ce-b34b-db48cd8e8200" (UID: "ef330858-933c-41ce-b34b-db48cd8e8200"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.623816 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ef330858-933c-41ce-b34b-db48cd8e8200" (UID: "ef330858-933c-41ce-b34b-db48cd8e8200"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.625563 4482 generic.go:334] "Generic (PLEG): container finished" podID="587f32ef-b1da-4e40-a1bc-33ba39c207e8" containerID="a058f6c0e6389a23d1ceb171c0925794b88369ff80095ba01042413a4a01a7f6" exitCode=0 Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.625651 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" event={"ID":"587f32ef-b1da-4e40-a1bc-33ba39c207e8","Type":"ContainerDied","Data":"a058f6c0e6389a23d1ceb171c0925794b88369ff80095ba01042413a4a01a7f6"} Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.625680 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" event={"ID":"587f32ef-b1da-4e40-a1bc-33ba39c207e8","Type":"ContainerDied","Data":"8d5c3f2b70beeae3d0a6c71c01ba202855c7c51a913cf8c882b07082b3fed232"} Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.625715 4482 scope.go:117] "RemoveContainer" containerID="a058f6c0e6389a23d1ceb171c0925794b88369ff80095ba01042413a4a01a7f6" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.625857 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.633703 4482 generic.go:334] "Generic (PLEG): container finished" podID="ef330858-933c-41ce-b34b-db48cd8e8200" containerID="2234c6f0436609d3eba4a8106c8c05843e0485276f308868a644297d1d0da6f5" exitCode=0 Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.633757 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" event={"ID":"ef330858-933c-41ce-b34b-db48cd8e8200","Type":"ContainerDied","Data":"2234c6f0436609d3eba4a8106c8c05843e0485276f308868a644297d1d0da6f5"} Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.633792 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" event={"ID":"ef330858-933c-41ce-b34b-db48cd8e8200","Type":"ContainerDied","Data":"68fc0f6532f89f714bb991e6ed1776353378a61abccb85059b50ccfaf7b1e20b"} Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.633851 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-shnd8" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.634022 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef330858-933c-41ce-b34b-db48cd8e8200-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef330858-933c-41ce-b34b-db48cd8e8200" (UID: "ef330858-933c-41ce-b34b-db48cd8e8200"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.642427 4482 scope.go:117] "RemoveContainer" containerID="a058f6c0e6389a23d1ceb171c0925794b88369ff80095ba01042413a4a01a7f6" Nov 25 06:59:25 crc kubenswrapper[4482]: E1125 06:59:25.642760 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a058f6c0e6389a23d1ceb171c0925794b88369ff80095ba01042413a4a01a7f6\": container with ID starting with a058f6c0e6389a23d1ceb171c0925794b88369ff80095ba01042413a4a01a7f6 not found: ID does not exist" containerID="a058f6c0e6389a23d1ceb171c0925794b88369ff80095ba01042413a4a01a7f6" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.642804 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a058f6c0e6389a23d1ceb171c0925794b88369ff80095ba01042413a4a01a7f6"} err="failed to get container status \"a058f6c0e6389a23d1ceb171c0925794b88369ff80095ba01042413a4a01a7f6\": rpc error: code = NotFound desc = could not find container \"a058f6c0e6389a23d1ceb171c0925794b88369ff80095ba01042413a4a01a7f6\": container with ID starting with a058f6c0e6389a23d1ceb171c0925794b88369ff80095ba01042413a4a01a7f6 not found: ID does not exist" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.642830 4482 scope.go:117] "RemoveContainer" containerID="2234c6f0436609d3eba4a8106c8c05843e0485276f308868a644297d1d0da6f5" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.658260 4482 scope.go:117] "RemoveContainer" containerID="2234c6f0436609d3eba4a8106c8c05843e0485276f308868a644297d1d0da6f5" Nov 25 06:59:25 crc kubenswrapper[4482]: E1125 06:59:25.659595 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2234c6f0436609d3eba4a8106c8c05843e0485276f308868a644297d1d0da6f5\": container with ID 
starting with 2234c6f0436609d3eba4a8106c8c05843e0485276f308868a644297d1d0da6f5 not found: ID does not exist" containerID="2234c6f0436609d3eba4a8106c8c05843e0485276f308868a644297d1d0da6f5" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.659645 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2234c6f0436609d3eba4a8106c8c05843e0485276f308868a644297d1d0da6f5"} err="failed to get container status \"2234c6f0436609d3eba4a8106c8c05843e0485276f308868a644297d1d0da6f5\": rpc error: code = NotFound desc = could not find container \"2234c6f0436609d3eba4a8106c8c05843e0485276f308868a644297d1d0da6f5\": container with ID starting with 2234c6f0436609d3eba4a8106c8c05843e0485276f308868a644297d1d0da6f5 not found: ID does not exist" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.724105 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/587f32ef-b1da-4e40-a1bc-33ba39c207e8-config\") pod \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.724147 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/587f32ef-b1da-4e40-a1bc-33ba39c207e8-client-ca\") pod \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.724198 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-config\") pod \"ef330858-933c-41ce-b34b-db48cd8e8200\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.724307 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj9xf\" (UniqueName: \"kubernetes.io/projected/ef330858-933c-41ce-b34b-db48cd8e8200-kube-api-access-bj9xf\") pod \"ef330858-933c-41ce-b34b-db48cd8e8200\" (UID: \"ef330858-933c-41ce-b34b-db48cd8e8200\") " Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.724350 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/587f32ef-b1da-4e40-a1bc-33ba39c207e8-serving-cert\") pod \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.724378 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x26tg\" (UniqueName: \"kubernetes.io/projected/587f32ef-b1da-4e40-a1bc-33ba39c207e8-kube-api-access-x26tg\") pod \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\" (UID: \"587f32ef-b1da-4e40-a1bc-33ba39c207e8\") " Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.724943 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-config" (OuterVolumeSpecName: "config") pod "ef330858-933c-41ce-b34b-db48cd8e8200" (UID: "ef330858-933c-41ce-b34b-db48cd8e8200"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.725007 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/587f32ef-b1da-4e40-a1bc-33ba39c207e8-client-ca" (OuterVolumeSpecName: "client-ca") pod "587f32ef-b1da-4e40-a1bc-33ba39c207e8" (UID: "587f32ef-b1da-4e40-a1bc-33ba39c207e8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.725150 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/587f32ef-b1da-4e40-a1bc-33ba39c207e8-config" (OuterVolumeSpecName: "config") pod "587f32ef-b1da-4e40-a1bc-33ba39c207e8" (UID: "587f32ef-b1da-4e40-a1bc-33ba39c207e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.725598 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/587f32ef-b1da-4e40-a1bc-33ba39c207e8-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.725646 4482 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/587f32ef-b1da-4e40-a1bc-33ba39c207e8-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.725661 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-config\") on node \"crc\" DevicePath \"\"" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.725672 4482 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.725684 4482 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef330858-933c-41ce-b34b-db48cd8e8200-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.725698 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef330858-933c-41ce-b34b-db48cd8e8200-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.727285 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/587f32ef-b1da-4e40-a1bc-33ba39c207e8-kube-api-access-x26tg" (OuterVolumeSpecName: "kube-api-access-x26tg") pod "587f32ef-b1da-4e40-a1bc-33ba39c207e8" (UID: "587f32ef-b1da-4e40-a1bc-33ba39c207e8"). InnerVolumeSpecName "kube-api-access-x26tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.728361 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587f32ef-b1da-4e40-a1bc-33ba39c207e8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "587f32ef-b1da-4e40-a1bc-33ba39c207e8" (UID: "587f32ef-b1da-4e40-a1bc-33ba39c207e8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.728913 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef330858-933c-41ce-b34b-db48cd8e8200-kube-api-access-bj9xf" (OuterVolumeSpecName: "kube-api-access-bj9xf") pod "ef330858-933c-41ce-b34b-db48cd8e8200" (UID: "ef330858-933c-41ce-b34b-db48cd8e8200"). InnerVolumeSpecName "kube-api-access-bj9xf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.826876 4482 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/587f32ef-b1da-4e40-a1bc-33ba39c207e8-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.826907 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x26tg\" (UniqueName: \"kubernetes.io/projected/587f32ef-b1da-4e40-a1bc-33ba39c207e8-kube-api-access-x26tg\") on node \"crc\" DevicePath \"\"" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.826918 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj9xf\" (UniqueName: \"kubernetes.io/projected/ef330858-933c-41ce-b34b-db48cd8e8200-kube-api-access-bj9xf\") on node \"crc\" DevicePath \"\"" Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.943824 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w"] Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.945964 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qbn2w"] Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.955573 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-shnd8"] Nov 25 06:59:25 crc kubenswrapper[4482]: I1125 06:59:25.958860 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-shnd8"] Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.763279 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v"] Nov 25 06:59:26 crc kubenswrapper[4482]: E1125 06:59:26.763864 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef330858-933c-41ce-b34b-db48cd8e8200" containerName="controller-manager" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.763880 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef330858-933c-41ce-b34b-db48cd8e8200" containerName="controller-manager" Nov 25 06:59:26 crc kubenswrapper[4482]: E1125 06:59:26.763901 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587f32ef-b1da-4e40-a1bc-33ba39c207e8" containerName="route-controller-manager" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.763907 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="587f32ef-b1da-4e40-a1bc-33ba39c207e8" containerName="route-controller-manager" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.764028 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef330858-933c-41ce-b34b-db48cd8e8200" containerName="controller-manager" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.764039 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="587f32ef-b1da-4e40-a1bc-33ba39c207e8" containerName="route-controller-manager" Nov 25 
06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.764590 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.766639 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7"] Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.767508 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.773077 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.773322 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.773478 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.773889 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.774059 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.774310 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.774437 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.774560 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.775159 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.775310 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.775421 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.775533 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.784267 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.796423 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v"] Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.799913 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7"] Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.840473 4482 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8fdw\" (UniqueName: \"kubernetes.io/projected/592a2c33-177f-47eb-88a8-57c5eeef5948-kube-api-access-c8fdw\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.840519 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb96b\" (UniqueName: \"kubernetes.io/projected/0620d4d4-1bd1-4c5e-a14d-98fae7289d59-kube-api-access-fb96b\") pod \"route-controller-manager-7c84d65b4c-bg4w7\" (UID: \"0620d4d4-1bd1-4c5e-a14d-98fae7289d59\") " pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.840677 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0620d4d4-1bd1-4c5e-a14d-98fae7289d59-config\") pod \"route-controller-manager-7c84d65b4c-bg4w7\" (UID: \"0620d4d4-1bd1-4c5e-a14d-98fae7289d59\") " pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.840721 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592a2c33-177f-47eb-88a8-57c5eeef5948-config\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.840776 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/592a2c33-177f-47eb-88a8-57c5eeef5948-serving-cert\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.840814 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0620d4d4-1bd1-4c5e-a14d-98fae7289d59-client-ca\") pod \"route-controller-manager-7c84d65b4c-bg4w7\" (UID: \"0620d4d4-1bd1-4c5e-a14d-98fae7289d59\") " pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.840852 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/592a2c33-177f-47eb-88a8-57c5eeef5948-client-ca\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.840875 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0620d4d4-1bd1-4c5e-a14d-98fae7289d59-serving-cert\") pod \"route-controller-manager-7c84d65b4c-bg4w7\" (UID: \"0620d4d4-1bd1-4c5e-a14d-98fae7289d59\") " pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.840898 4482 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/592a2c33-177f-47eb-88a8-57c5eeef5948-proxy-ca-bundles\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.942136 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0620d4d4-1bd1-4c5e-a14d-98fae7289d59-client-ca\") pod \"route-controller-manager-7c84d65b4c-bg4w7\" (UID: \"0620d4d4-1bd1-4c5e-a14d-98fae7289d59\") " pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.942198 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/592a2c33-177f-47eb-88a8-57c5eeef5948-client-ca\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.942215 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0620d4d4-1bd1-4c5e-a14d-98fae7289d59-serving-cert\") pod \"route-controller-manager-7c84d65b4c-bg4w7\" (UID: \"0620d4d4-1bd1-4c5e-a14d-98fae7289d59\") " pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.942240 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/592a2c33-177f-47eb-88a8-57c5eeef5948-proxy-ca-bundles\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.942261 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8fdw\" (UniqueName: \"kubernetes.io/projected/592a2c33-177f-47eb-88a8-57c5eeef5948-kube-api-access-c8fdw\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.942283 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb96b\" (UniqueName: \"kubernetes.io/projected/0620d4d4-1bd1-4c5e-a14d-98fae7289d59-kube-api-access-fb96b\") pod \"route-controller-manager-7c84d65b4c-bg4w7\" (UID: \"0620d4d4-1bd1-4c5e-a14d-98fae7289d59\") " pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.942332 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0620d4d4-1bd1-4c5e-a14d-98fae7289d59-config\") pod \"route-controller-manager-7c84d65b4c-bg4w7\" (UID: \"0620d4d4-1bd1-4c5e-a14d-98fae7289d59\") " pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.942351 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/592a2c33-177f-47eb-88a8-57c5eeef5948-config\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.942384 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/592a2c33-177f-47eb-88a8-57c5eeef5948-serving-cert\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.943846 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0620d4d4-1bd1-4c5e-a14d-98fae7289d59-client-ca\") pod \"route-controller-manager-7c84d65b4c-bg4w7\" (UID: \"0620d4d4-1bd1-4c5e-a14d-98fae7289d59\") " pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.944149 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/592a2c33-177f-47eb-88a8-57c5eeef5948-client-ca\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.944709 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0620d4d4-1bd1-4c5e-a14d-98fae7289d59-config\") pod \"route-controller-manager-7c84d65b4c-bg4w7\" (UID: \"0620d4d4-1bd1-4c5e-a14d-98fae7289d59\") " pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.944907 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592a2c33-177f-47eb-88a8-57c5eeef5948-config\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.945323 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/592a2c33-177f-47eb-88a8-57c5eeef5948-proxy-ca-bundles\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.962800 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/592a2c33-177f-47eb-88a8-57c5eeef5948-serving-cert\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.963197 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0620d4d4-1bd1-4c5e-a14d-98fae7289d59-serving-cert\") pod \"route-controller-manager-7c84d65b4c-bg4w7\" (UID: \"0620d4d4-1bd1-4c5e-a14d-98fae7289d59\") " 
pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.988770 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8fdw\" (UniqueName: \"kubernetes.io/projected/592a2c33-177f-47eb-88a8-57c5eeef5948-kube-api-access-c8fdw\") pod \"controller-manager-5979ff5fb9-5gq7v\" (UID: \"592a2c33-177f-47eb-88a8-57c5eeef5948\") " pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:26 crc kubenswrapper[4482]: I1125 06:59:26.990909 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb96b\" (UniqueName: \"kubernetes.io/projected/0620d4d4-1bd1-4c5e-a14d-98fae7289d59-kube-api-access-fb96b\") pod \"route-controller-manager-7c84d65b4c-bg4w7\" (UID: \"0620d4d4-1bd1-4c5e-a14d-98fae7289d59\") " pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:27 crc kubenswrapper[4482]: I1125 06:59:27.079801 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:27 crc kubenswrapper[4482]: I1125 06:59:27.085875 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:27 crc kubenswrapper[4482]: I1125 06:59:27.437632 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7"] Nov 25 06:59:27 crc kubenswrapper[4482]: I1125 06:59:27.518001 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v"] Nov 25 06:59:27 crc kubenswrapper[4482]: I1125 06:59:27.648973 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" event={"ID":"592a2c33-177f-47eb-88a8-57c5eeef5948","Type":"ContainerStarted","Data":"32c522c01243e77edc5baa52cf0133bf7fd0dc50a38fb06c53a4473b4eb786e8"} Nov 25 06:59:27 crc kubenswrapper[4482]: I1125 06:59:27.650695 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" event={"ID":"0620d4d4-1bd1-4c5e-a14d-98fae7289d59","Type":"ContainerStarted","Data":"ab4131f9521339c0fb9d5db19835ab78429b76fc9c541df7345cb5f3cd3a7286"} Nov 25 06:59:27 crc kubenswrapper[4482]: I1125 06:59:27.837381 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="587f32ef-b1da-4e40-a1bc-33ba39c207e8" path="/var/lib/kubelet/pods/587f32ef-b1da-4e40-a1bc-33ba39c207e8/volumes" Nov 25 06:59:27 crc kubenswrapper[4482]: I1125 06:59:27.838157 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef330858-933c-41ce-b34b-db48cd8e8200" path="/var/lib/kubelet/pods/ef330858-933c-41ce-b34b-db48cd8e8200/volumes" Nov 25 06:59:28 crc kubenswrapper[4482]: I1125 06:59:28.659080 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" event={"ID":"0620d4d4-1bd1-4c5e-a14d-98fae7289d59","Type":"ContainerStarted","Data":"ce38feb0540e0f618140d001fd2ab18a60cc725e47606a94bd700ebc8e72c8ee"} Nov 25 06:59:28 crc kubenswrapper[4482]: I1125 06:59:28.659528 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 
06:59:28 crc kubenswrapper[4482]: I1125 06:59:28.661107 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" event={"ID":"592a2c33-177f-47eb-88a8-57c5eeef5948","Type":"ContainerStarted","Data":"afe5c1da92d595618ccc8d3584c3ad4dfc5b021f08e912e203a0d514f4c10b30"} Nov 25 06:59:28 crc kubenswrapper[4482]: I1125 06:59:28.661306 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:28 crc kubenswrapper[4482]: I1125 06:59:28.671744 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" Nov 25 06:59:28 crc kubenswrapper[4482]: I1125 06:59:28.678799 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" Nov 25 06:59:28 crc kubenswrapper[4482]: I1125 06:59:28.777996 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c84d65b4c-bg4w7" podStartSLOduration=3.777979374 podStartE2EDuration="3.777979374s" podCreationTimestamp="2025-11-25 06:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:59:28.776437888 +0000 UTC m=+743.264669148" watchObservedRunningTime="2025-11-25 06:59:28.777979374 +0000 UTC m=+743.266210633" Nov 25 06:59:28 crc kubenswrapper[4482]: I1125 06:59:28.898790 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5979ff5fb9-5gq7v" podStartSLOduration=3.89877114 podStartE2EDuration="3.89877114s" podCreationTimestamp="2025-11-25 06:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 06:59:28.837256027 +0000 UTC m=+743.325487287" watchObservedRunningTime="2025-11-25 06:59:28.89877114 +0000 UTC m=+743.387002398" Nov 25 06:59:34 crc kubenswrapper[4482]: I1125 06:59:34.722379 4482 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.282664 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.283869 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.286123 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mwcmx" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.302003 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.302862 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.306001 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.306584 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jjmk9" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.319750 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.320491 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.322685 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rrmbj" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.331916 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8rkw\" (UniqueName: \"kubernetes.io/projected/4754fff5-c20f-42c5-8c10-bb9975919bf3-kube-api-access-s8rkw\") pod \"barbican-operator-controller-manager-86dc4d89c8-svglr\" (UID: \"4754fff5-c20f-42c5-8c10-bb9975919bf3\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.332061 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpr7k\" (UniqueName: \"kubernetes.io/projected/20c9d02f-1cbc-4c66-84ff-7cbf40bac507-kube-api-access-kpr7k\") pod \"cinder-operator-controller-manager-79856dc55c-r6cc4\" (UID: \"20c9d02f-1cbc-4c66-84ff-7cbf40bac507\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.337096 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.342982 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.402382 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.403459 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.405566 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hk9xb" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.412258 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.413460 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.428402 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.439032 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6smw\" (UniqueName: \"kubernetes.io/projected/a2dcdd81-a863-4453-b1b6-e1824d5444b6-kube-api-access-v6smw\") pod \"designate-operator-controller-manager-7d695c9b56-t4dwf\" (UID: \"a2dcdd81-a863-4453-b1b6-e1824d5444b6\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.439201 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5gp4\" (UniqueName: \"kubernetes.io/projected/2375b89e-398f-45d4-badc-1980cfcda4a1-kube-api-access-w5gp4\") pod \"glance-operator-controller-manager-68b95954c9-2qkzx\" (UID: \"2375b89e-398f-45d4-badc-1980cfcda4a1\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.439530 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8rkw\" (UniqueName: \"kubernetes.io/projected/4754fff5-c20f-42c5-8c10-bb9975919bf3-kube-api-access-s8rkw\") pod \"barbican-operator-controller-manager-86dc4d89c8-svglr\" (UID: \"4754fff5-c20f-42c5-8c10-bb9975919bf3\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.439662 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpr7k\" (UniqueName: \"kubernetes.io/projected/20c9d02f-1cbc-4c66-84ff-7cbf40bac507-kube-api-access-kpr7k\") pod \"cinder-operator-controller-manager-79856dc55c-r6cc4\" (UID: \"20c9d02f-1cbc-4c66-84ff-7cbf40bac507\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.439837 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2mkb\" (UniqueName: \"kubernetes.io/projected/f3eb6724-3ab3-4027-b8e6-3d90c403f13a-kube-api-access-s2mkb\") pod \"heat-operator-controller-manager-774b86978c-t6mdk\" (UID: \"f3eb6724-3ab3-4027-b8e6-3d90c403f13a\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.442788 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.467515 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-zh4xs" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.470354 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4nvnb" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.513237 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8rkw\" (UniqueName: \"kubernetes.io/projected/4754fff5-c20f-42c5-8c10-bb9975919bf3-kube-api-access-s8rkw\") pod \"barbican-operator-controller-manager-86dc4d89c8-svglr\" (UID: \"4754fff5-c20f-42c5-8c10-bb9975919bf3\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.516155 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.529457 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpr7k\" (UniqueName: \"kubernetes.io/projected/20c9d02f-1cbc-4c66-84ff-7cbf40bac507-kube-api-access-kpr7k\") pod \"cinder-operator-controller-manager-79856dc55c-r6cc4\" (UID: \"20c9d02f-1cbc-4c66-84ff-7cbf40bac507\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.537791 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.543612 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5gp4\" (UniqueName: \"kubernetes.io/projected/2375b89e-398f-45d4-badc-1980cfcda4a1-kube-api-access-w5gp4\") pod \"glance-operator-controller-manager-68b95954c9-2qkzx\" (UID: \"2375b89e-398f-45d4-badc-1980cfcda4a1\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.543744 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b6w8\" (UniqueName: \"kubernetes.io/projected/d0b2883e-6d53-465c-ba0c-45173ff59d4b-kube-api-access-2b6w8\") pod \"horizon-operator-controller-manager-68c9694994-tzkbq\" (UID: \"d0b2883e-6d53-465c-ba0c-45173ff59d4b\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.543851 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2mkb\" (UniqueName: \"kubernetes.io/projected/f3eb6724-3ab3-4027-b8e6-3d90c403f13a-kube-api-access-s2mkb\") pod \"heat-operator-controller-manager-774b86978c-t6mdk\" (UID: \"f3eb6724-3ab3-4027-b8e6-3d90c403f13a\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.543928 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6smw\" (UniqueName: \"kubernetes.io/projected/a2dcdd81-a863-4453-b1b6-e1824d5444b6-kube-api-access-v6smw\") pod \"designate-operator-controller-manager-7d695c9b56-t4dwf\" (UID: 
\"a2dcdd81-a863-4453-b1b6-e1824d5444b6\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.545994 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.549115 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.551894 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.559528 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.562948 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-w2594" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.563097 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.570767 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.570807 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.570953 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.576595 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.578440 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.582601 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-lm9gw" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.583480 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-q59mz" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.600945 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.608638 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2mkb\" (UniqueName: \"kubernetes.io/projected/f3eb6724-3ab3-4027-b8e6-3d90c403f13a-kube-api-access-s2mkb\") pod \"heat-operator-controller-manager-774b86978c-t6mdk\" (UID: \"f3eb6724-3ab3-4027-b8e6-3d90c403f13a\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.611261 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5gp4\" (UniqueName: \"kubernetes.io/projected/2375b89e-398f-45d4-badc-1980cfcda4a1-kube-api-access-w5gp4\") pod \"glance-operator-controller-manager-68b95954c9-2qkzx\" (UID: \"2375b89e-398f-45d4-badc-1980cfcda4a1\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.612798 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6smw\" (UniqueName: \"kubernetes.io/projected/a2dcdd81-a863-4453-b1b6-e1824d5444b6-kube-api-access-v6smw\") pod \"designate-operator-controller-manager-7d695c9b56-t4dwf\" (UID: \"a2dcdd81-a863-4453-b1b6-e1824d5444b6\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.615785 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.634538 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.640895 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.642087 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.645743 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-x2fgb" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.646685 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a5cd60b-13ff-44ea-b256-1e05d03912e4-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-lx6v6\" (UID: \"3a5cd60b-13ff-44ea-b256-1e05d03912e4\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.646732 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqbvx\" (UniqueName: \"kubernetes.io/projected/3a5cd60b-13ff-44ea-b256-1e05d03912e4-kube-api-access-cqbvx\") pod \"infra-operator-controller-manager-d5cc86f4b-lx6v6\" (UID: \"3a5cd60b-13ff-44ea-b256-1e05d03912e4\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.646801 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxshs\" (UniqueName: \"kubernetes.io/projected/3ec6220d-a590-404d-a427-98b94a3910c8-kube-api-access-kxshs\") pod \"ironic-operator-controller-manager-5bfcdc958c-5pr4g\" (UID: \"3ec6220d-a590-404d-a427-98b94a3910c8\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.646852 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b6w8\" (UniqueName: \"kubernetes.io/projected/d0b2883e-6d53-465c-ba0c-45173ff59d4b-kube-api-access-2b6w8\") pod \"horizon-operator-controller-manager-68c9694994-tzkbq\" (UID: \"d0b2883e-6d53-465c-ba0c-45173ff59d4b\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.646964 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8sj4\" (UniqueName: \"kubernetes.io/projected/9dbafcad-7706-4390-9745-238418d06f5c-kube-api-access-l8sj4\") pod \"manila-operator-controller-manager-58bb8d67cc-m5rfx\" (UID: \"9dbafcad-7706-4390-9745-238418d06f5c\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.647410 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.656982 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.657012 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.657940 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.661102 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.661760 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.662560 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-24qbv" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.674607 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.674750 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-r4bvk" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.676684 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b6w8\" (UniqueName: \"kubernetes.io/projected/d0b2883e-6d53-465c-ba0c-45173ff59d4b-kube-api-access-2b6w8\") pod \"horizon-operator-controller-manager-68c9694994-tzkbq\" (UID: \"d0b2883e-6d53-465c-ba0c-45173ff59d4b\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.678532 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.679192 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.679434 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.680524 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-29djj" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.680719 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-vft5t" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.683303 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.689925 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.696231 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.724067 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.730577 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.750878 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.752483 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a5cd60b-13ff-44ea-b256-1e05d03912e4-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-lx6v6\" (UID: \"3a5cd60b-13ff-44ea-b256-1e05d03912e4\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.752520 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qqmf\" (UniqueName: \"kubernetes.io/projected/4012508a-01a7-4e14-812e-7c70b350662a-kube-api-access-5qqmf\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-pv5cc\" (UID: \"4012508a-01a7-4e14-812e-7c70b350662a\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.752543 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqbvx\" (UniqueName: \"kubernetes.io/projected/3a5cd60b-13ff-44ea-b256-1e05d03912e4-kube-api-access-cqbvx\") pod \"infra-operator-controller-manager-d5cc86f4b-lx6v6\" (UID: \"3a5cd60b-13ff-44ea-b256-1e05d03912e4\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.752582 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vknz\" (UniqueName: \"kubernetes.io/projected/4d7476c3-dd4a-4e22-a018-e9a93d53ece5-kube-api-access-8vknz\") pod \"neutron-operator-controller-manager-7c57c8bbc4-jq46h\" (UID: \"4d7476c3-dd4a-4e22-a018-e9a93d53ece5\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.752603 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxshs\" (UniqueName: \"kubernetes.io/projected/3ec6220d-a590-404d-a427-98b94a3910c8-kube-api-access-kxshs\") pod \"ironic-operator-controller-manager-5bfcdc958c-5pr4g\" (UID: \"3ec6220d-a590-404d-a427-98b94a3910c8\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.752623 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk5nd\" (UniqueName: \"kubernetes.io/projected/42e69f15-3b24-4d83-840e-3633c1bb87a3-kube-api-access-kk5nd\") pod \"octavia-operator-controller-manager-fd75fd47d-xtvvg\" (UID: \"42e69f15-3b24-4d83-840e-3633c1bb87a3\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.752650 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4mjj\" (UniqueName: \"kubernetes.io/projected/6ad00506-e452-4f9e-91d3-24b4da4a7104-kube-api-access-k4mjj\") pod \"nova-operator-controller-manager-79556f57fc-2x9vp\" (UID: \"6ad00506-e452-4f9e-91d3-24b4da4a7104\") " 
pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.752688 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8sj4\" (UniqueName: \"kubernetes.io/projected/9dbafcad-7706-4390-9745-238418d06f5c-kube-api-access-l8sj4\") pod \"manila-operator-controller-manager-58bb8d67cc-m5rfx\" (UID: \"9dbafcad-7706-4390-9745-238418d06f5c\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.752707 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk7q9\" (UniqueName: \"kubernetes.io/projected/4a4c6e25-e4fb-49b7-b757-e82e153fdb24-kube-api-access-hk7q9\") pod \"keystone-operator-controller-manager-748dc6576f-8ttss\" (UID: \"4a4c6e25-e4fb-49b7-b757-e82e153fdb24\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" Nov 25 06:59:38 crc kubenswrapper[4482]: E1125 06:59:38.752824 4482 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 06:59:38 crc kubenswrapper[4482]: E1125 06:59:38.752861 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a5cd60b-13ff-44ea-b256-1e05d03912e4-cert podName:3a5cd60b-13ff-44ea-b256-1e05d03912e4 nodeName:}" failed. No retries permitted until 2025-11-25 06:59:39.25284748 +0000 UTC m=+753.741078739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a5cd60b-13ff-44ea-b256-1e05d03912e4-cert") pod "infra-operator-controller-manager-d5cc86f4b-lx6v6" (UID: "3a5cd60b-13ff-44ea-b256-1e05d03912e4") : secret "infra-operator-webhook-server-cert" not found Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.770035 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.771016 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.780218 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.788697 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.788953 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-59d6s" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.793800 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxshs\" (UniqueName: \"kubernetes.io/projected/3ec6220d-a590-404d-a427-98b94a3910c8-kube-api-access-kxshs\") pod \"ironic-operator-controller-manager-5bfcdc958c-5pr4g\" (UID: \"3ec6220d-a590-404d-a427-98b94a3910c8\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.798407 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.799154 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.816106 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-224hz" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.816597 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.817900 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.819218 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8sj4\" (UniqueName: \"kubernetes.io/projected/9dbafcad-7706-4390-9745-238418d06f5c-kube-api-access-l8sj4\") pod \"manila-operator-controller-manager-58bb8d67cc-m5rfx\" (UID: \"9dbafcad-7706-4390-9745-238418d06f5c\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.820410 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-jhxwv" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.824242 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqbvx\" (UniqueName: \"kubernetes.io/projected/3a5cd60b-13ff-44ea-b256-1e05d03912e4-kube-api-access-cqbvx\") pod \"infra-operator-controller-manager-d5cc86f4b-lx6v6\" (UID: \"3a5cd60b-13ff-44ea-b256-1e05d03912e4\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.826596 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.850422 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.854160 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-f8qvp" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.854481 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qqmf\" (UniqueName: \"kubernetes.io/projected/4012508a-01a7-4e14-812e-7c70b350662a-kube-api-access-5qqmf\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-pv5cc\" (UID: \"4012508a-01a7-4e14-812e-7c70b350662a\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.854533 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvlhd\" (UniqueName: \"kubernetes.io/projected/4a627cd2-d42b-4958-a41c-230dd8246061-kube-api-access-qvlhd\") pod \"swift-operator-controller-manager-6fdc4fcf86-5zxlt\" (UID: \"4a627cd2-d42b-4958-a41c-230dd8246061\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.854570 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vknz\" (UniqueName: \"kubernetes.io/projected/4d7476c3-dd4a-4e22-a018-e9a93d53ece5-kube-api-access-8vknz\") pod \"neutron-operator-controller-manager-7c57c8bbc4-jq46h\" (UID: \"4d7476c3-dd4a-4e22-a018-e9a93d53ece5\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.854601 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsmsf\" (UniqueName: \"kubernetes.io/projected/ee690930-78a0-4f7d-be10-feee0cf523d7-kube-api-access-tsmsf\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-tlwch\" (UID: \"ee690930-78a0-4f7d-be10-feee0cf523d7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.854621 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk5nd\" (UniqueName: \"kubernetes.io/projected/42e69f15-3b24-4d83-840e-3633c1bb87a3-kube-api-access-kk5nd\") pod \"octavia-operator-controller-manager-fd75fd47d-xtvvg\" (UID: \"42e69f15-3b24-4d83-840e-3633c1bb87a3\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.854647 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpxdn\" (UniqueName: \"kubernetes.io/projected/1af05cb8-e059-49d7-91dc-17bfecaec8db-kube-api-access-kpxdn\") pod \"ovn-operator-controller-manager-66cf5c67ff-2cfdk\" (UID: \"1af05cb8-e059-49d7-91dc-17bfecaec8db\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.854669 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4mjj\" (UniqueName: \"kubernetes.io/projected/6ad00506-e452-4f9e-91d3-24b4da4a7104-kube-api-access-k4mjj\") pod \"nova-operator-controller-manager-79556f57fc-2x9vp\" (UID: \"6ad00506-e452-4f9e-91d3-24b4da4a7104\") " 
pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.854701 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee690930-78a0-4f7d-be10-feee0cf523d7-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-tlwch\" (UID: \"ee690930-78a0-4f7d-be10-feee0cf523d7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.854741 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk7q9\" (UniqueName: \"kubernetes.io/projected/4a4c6e25-e4fb-49b7-b757-e82e153fdb24-kube-api-access-hk7q9\") pod \"keystone-operator-controller-manager-748dc6576f-8ttss\" (UID: \"4a4c6e25-e4fb-49b7-b757-e82e153fdb24\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.894285 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4mjj\" (UniqueName: \"kubernetes.io/projected/6ad00506-e452-4f9e-91d3-24b4da4a7104-kube-api-access-k4mjj\") pod \"nova-operator-controller-manager-79556f57fc-2x9vp\" (UID: \"6ad00506-e452-4f9e-91d3-24b4da4a7104\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.895201 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.895651 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.921883 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk7q9\" (UniqueName: \"kubernetes.io/projected/4a4c6e25-e4fb-49b7-b757-e82e153fdb24-kube-api-access-hk7q9\") pod \"keystone-operator-controller-manager-748dc6576f-8ttss\" (UID: \"4a4c6e25-e4fb-49b7-b757-e82e153fdb24\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.947566 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt"] Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.962070 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5fpd\" (UniqueName: \"kubernetes.io/projected/3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a-kube-api-access-c5fpd\") pod \"placement-operator-controller-manager-5db546f9d9-k8drr\" (UID: \"3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.962132 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvlhd\" (UniqueName: \"kubernetes.io/projected/4a627cd2-d42b-4958-a41c-230dd8246061-kube-api-access-qvlhd\") pod \"swift-operator-controller-manager-6fdc4fcf86-5zxlt\" (UID: \"4a627cd2-d42b-4958-a41c-230dd8246061\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.969326 4482 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.972527 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsmsf\" (UniqueName: \"kubernetes.io/projected/ee690930-78a0-4f7d-be10-feee0cf523d7-kube-api-access-tsmsf\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-tlwch\" (UID: \"ee690930-78a0-4f7d-be10-feee0cf523d7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.972585 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpxdn\" (UniqueName: \"kubernetes.io/projected/1af05cb8-e059-49d7-91dc-17bfecaec8db-kube-api-access-kpxdn\") pod \"ovn-operator-controller-manager-66cf5c67ff-2cfdk\" (UID: \"1af05cb8-e059-49d7-91dc-17bfecaec8db\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.972639 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee690930-78a0-4f7d-be10-feee0cf523d7-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-tlwch\" (UID: \"ee690930-78a0-4f7d-be10-feee0cf523d7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" Nov 25 06:59:38 crc kubenswrapper[4482]: E1125 06:59:38.975871 4482 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 06:59:38 crc kubenswrapper[4482]: E1125 06:59:38.975933 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee690930-78a0-4f7d-be10-feee0cf523d7-cert podName:ee690930-78a0-4f7d-be10-feee0cf523d7 nodeName:}" failed. No retries permitted until 2025-11-25 06:59:39.47591684 +0000 UTC m=+753.964148099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee690930-78a0-4f7d-be10-feee0cf523d7-cert") pod "openstack-baremetal-operator-controller-manager-b58f89467-tlwch" (UID: "ee690930-78a0-4f7d-be10-feee0cf523d7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.985490 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vknz\" (UniqueName: \"kubernetes.io/projected/4d7476c3-dd4a-4e22-a018-e9a93d53ece5-kube-api-access-8vknz\") pod \"neutron-operator-controller-manager-7c57c8bbc4-jq46h\" (UID: \"4d7476c3-dd4a-4e22-a018-e9a93d53ece5\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" Nov 25 06:59:38 crc kubenswrapper[4482]: I1125 06:59:38.987476 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qqmf\" (UniqueName: \"kubernetes.io/projected/4012508a-01a7-4e14-812e-7c70b350662a-kube-api-access-5qqmf\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-pv5cc\" (UID: \"4012508a-01a7-4e14-812e-7c70b350662a\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.000771 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.004657 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk5nd\" (UniqueName: \"kubernetes.io/projected/42e69f15-3b24-4d83-840e-3633c1bb87a3-kube-api-access-kk5nd\") pod \"octavia-operator-controller-manager-fd75fd47d-xtvvg\" (UID: \"42e69f15-3b24-4d83-840e-3633c1bb87a3\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.027584 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.036419 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.052077 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.052676 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.057146 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpxdn\" (UniqueName: \"kubernetes.io/projected/1af05cb8-e059-49d7-91dc-17bfecaec8db-kube-api-access-kpxdn\") pod \"ovn-operator-controller-manager-66cf5c67ff-2cfdk\" (UID: \"1af05cb8-e059-49d7-91dc-17bfecaec8db\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.059081 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.075936 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.082806 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5fpd\" (UniqueName: \"kubernetes.io/projected/3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a-kube-api-access-c5fpd\") pod \"placement-operator-controller-manager-5db546f9d9-k8drr\" (UID: \"3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.094336 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.096162 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.099376 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsmsf\" (UniqueName: \"kubernetes.io/projected/ee690930-78a0-4f7d-be10-feee0cf523d7-kube-api-access-tsmsf\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-tlwch\" (UID: \"ee690930-78a0-4f7d-be10-feee0cf523d7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.112921 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvlhd\" (UniqueName: \"kubernetes.io/projected/4a627cd2-d42b-4958-a41c-230dd8246061-kube-api-access-qvlhd\") pod \"swift-operator-controller-manager-6fdc4fcf86-5zxlt\" (UID: \"4a627cd2-d42b-4958-a41c-230dd8246061\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.114279 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-2hc7q" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.118852 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.118906 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.118954 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.119603 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18fd7402468da26f930d0a283cd4f3dcbe4ac307cf8525f069560121b3739a9f"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.119652 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://18fd7402468da26f930d0a283cd4f3dcbe4ac307cf8525f069560121b3739a9f" gracePeriod=600 Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.146159 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.150269 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-s25q8"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.151644 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.157717 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5fpd\" (UniqueName: \"kubernetes.io/projected/3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a-kube-api-access-c5fpd\") pod \"placement-operator-controller-manager-5db546f9d9-k8drr\" (UID: \"3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.162145 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.162203 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-m7kcf"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.163189 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.165564 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-bnrld" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.176695 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.177499 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-nrnxx" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.188236 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-s25q8"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.195127 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwxsc\" (UniqueName: \"kubernetes.io/projected/4be124a3-1fa2-455c-834f-01e66fc326b3-kube-api-access-pwxsc\") pod \"telemetry-operator-controller-manager-567f98c9d-zdvcm\" (UID: \"4be124a3-1fa2-455c-834f-01e66fc326b3\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.195283 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwxw\" (UniqueName: \"kubernetes.io/projected/7059a6d7-9dca-499a-9110-e8dafb53935b-kube-api-access-2hwxw\") pod \"test-operator-controller-manager-5cb74df96-s25q8\" (UID: \"7059a6d7-9dca-499a-9110-e8dafb53935b\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.202001 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-m7kcf"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.224454 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.302050 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hwxw\" (UniqueName: \"kubernetes.io/projected/7059a6d7-9dca-499a-9110-e8dafb53935b-kube-api-access-2hwxw\") pod \"test-operator-controller-manager-5cb74df96-s25q8\" (UID: \"7059a6d7-9dca-499a-9110-e8dafb53935b\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.302101 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dctdt\" (UniqueName: \"kubernetes.io/projected/4ab40028-48ce-48f7-bbd4-97b1bed0cf4c-kube-api-access-dctdt\") pod \"watcher-operator-controller-manager-864885998-m7kcf\" (UID: \"4ab40028-48ce-48f7-bbd4-97b1bed0cf4c\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.302179 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwxsc\" (UniqueName: \"kubernetes.io/projected/4be124a3-1fa2-455c-834f-01e66fc326b3-kube-api-access-pwxsc\") pod \"telemetry-operator-controller-manager-567f98c9d-zdvcm\" (UID: \"4be124a3-1fa2-455c-834f-01e66fc326b3\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.302201 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a5cd60b-13ff-44ea-b256-1e05d03912e4-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-lx6v6\" (UID: \"3a5cd60b-13ff-44ea-b256-1e05d03912e4\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 06:59:39 crc kubenswrapper[4482]: E1125 06:59:39.302344 4482 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 06:59:39 crc kubenswrapper[4482]: E1125 06:59:39.302389 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a5cd60b-13ff-44ea-b256-1e05d03912e4-cert podName:3a5cd60b-13ff-44ea-b256-1e05d03912e4 nodeName:}" failed. No retries permitted until 2025-11-25 06:59:40.302376019 +0000 UTC m=+754.790607278 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a5cd60b-13ff-44ea-b256-1e05d03912e4-cert") pod "infra-operator-controller-manager-d5cc86f4b-lx6v6" (UID: "3a5cd60b-13ff-44ea-b256-1e05d03912e4") : secret "infra-operator-webhook-server-cert" not found Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.330521 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwxsc\" (UniqueName: \"kubernetes.io/projected/4be124a3-1fa2-455c-834f-01e66fc326b3-kube-api-access-pwxsc\") pod \"telemetry-operator-controller-manager-567f98c9d-zdvcm\" (UID: \"4be124a3-1fa2-455c-834f-01e66fc326b3\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.344033 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hwxw\" (UniqueName: \"kubernetes.io/projected/7059a6d7-9dca-499a-9110-e8dafb53935b-kube-api-access-2hwxw\") pod \"test-operator-controller-manager-5cb74df96-s25q8\" (UID: \"7059a6d7-9dca-499a-9110-e8dafb53935b\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.381495 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.384059 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.387631 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.387770 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.387997 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-6zp6c" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.396943 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.416100 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dctdt\" (UniqueName: \"kubernetes.io/projected/4ab40028-48ce-48f7-bbd4-97b1bed0cf4c-kube-api-access-dctdt\") pod \"watcher-operator-controller-manager-864885998-m7kcf\" (UID: \"4ab40028-48ce-48f7-bbd4-97b1bed0cf4c\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.468151 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.469135 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dctdt\" (UniqueName: \"kubernetes.io/projected/4ab40028-48ce-48f7-bbd4-97b1bed0cf4c-kube-api-access-dctdt\") pod \"watcher-operator-controller-manager-864885998-m7kcf\" (UID: \"4ab40028-48ce-48f7-bbd4-97b1bed0cf4c\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.538671 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.539622 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-kmdnq\" (UID: \"004e08bd-55ee-4702-88b6-69bd67a32610\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.539733 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee690930-78a0-4f7d-be10-feee0cf523d7-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-tlwch\" (UID: \"ee690930-78a0-4f7d-be10-feee0cf523d7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.539815 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-kmdnq\" (UID: \"004e08bd-55ee-4702-88b6-69bd67a32610\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.539927 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhfpv\" (UniqueName: \"kubernetes.io/projected/004e08bd-55ee-4702-88b6-69bd67a32610-kube-api-access-jhfpv\") pod \"openstack-operator-controller-manager-7cd5954d9-kmdnq\" (UID: \"004e08bd-55ee-4702-88b6-69bd67a32610\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:39 crc kubenswrapper[4482]: E1125 06:59:39.540112 4482 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 06:59:39 crc kubenswrapper[4482]: E1125 06:59:39.540210 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee690930-78a0-4f7d-be10-feee0cf523d7-cert podName:ee690930-78a0-4f7d-be10-feee0cf523d7 nodeName:}" failed. No retries permitted until 2025-11-25 06:59:40.540198072 +0000 UTC m=+755.028429320 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee690930-78a0-4f7d-be10-feee0cf523d7-cert") pod "openstack-baremetal-operator-controller-manager-b58f89467-tlwch" (UID: "ee690930-78a0-4f7d-be10-feee0cf523d7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.540583 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.614446 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.615495 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.626460 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-rmbqp" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.640357 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.641639 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-kmdnq\" (UID: \"004e08bd-55ee-4702-88b6-69bd67a32610\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.641692 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-kmdnq\" (UID: \"004e08bd-55ee-4702-88b6-69bd67a32610\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.641790 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhfpv\" (UniqueName: \"kubernetes.io/projected/004e08bd-55ee-4702-88b6-69bd67a32610-kube-api-access-jhfpv\") pod \"openstack-operator-controller-manager-7cd5954d9-kmdnq\" (UID: \"004e08bd-55ee-4702-88b6-69bd67a32610\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:39 crc kubenswrapper[4482]: E1125 06:59:39.642256 4482 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 06:59:39 crc kubenswrapper[4482]: E1125 06:59:39.642319 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-metrics-certs podName:004e08bd-55ee-4702-88b6-69bd67a32610 nodeName:}" failed. No retries permitted until 2025-11-25 06:59:40.142287741 +0000 UTC m=+754.630519000 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-metrics-certs") pod "openstack-operator-controller-manager-7cd5954d9-kmdnq" (UID: "004e08bd-55ee-4702-88b6-69bd67a32610") : secret "metrics-server-cert" not found Nov 25 06:59:39 crc kubenswrapper[4482]: E1125 06:59:39.642481 4482 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 06:59:39 crc kubenswrapper[4482]: E1125 06:59:39.642503 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-webhook-certs podName:004e08bd-55ee-4702-88b6-69bd67a32610 nodeName:}" failed. No retries permitted until 2025-11-25 06:59:40.142496946 +0000 UTC m=+754.630728204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-webhook-certs") pod "openstack-operator-controller-manager-7cd5954d9-kmdnq" (UID: "004e08bd-55ee-4702-88b6-69bd67a32610") : secret "webhook-server-cert" not found Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.645021 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.666355 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.671354 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhfpv\" (UniqueName: \"kubernetes.io/projected/004e08bd-55ee-4702-88b6-69bd67a32610-kube-api-access-jhfpv\") pod \"openstack-operator-controller-manager-7cd5954d9-kmdnq\" (UID: \"004e08bd-55ee-4702-88b6-69bd67a32610\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.720131 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf"] Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.747975 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg869\" (UniqueName: \"kubernetes.io/projected/337411b1-ff37-4370-ad36-415f816f5d07-kube-api-access-wg869\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4mr9n\" (UID: \"337411b1-ff37-4370-ad36-415f816f5d07\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.800717 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" event={"ID":"4754fff5-c20f-42c5-8c10-bb9975919bf3","Type":"ContainerStarted","Data":"18a49a0648aab79837f5b35b9e265714281aeddb890fccff6595e8836fea8297"} Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.807921 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="18fd7402468da26f930d0a283cd4f3dcbe4ac307cf8525f069560121b3739a9f" exitCode=0 Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.807947 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" 
event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"18fd7402468da26f930d0a283cd4f3dcbe4ac307cf8525f069560121b3739a9f"} Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.807973 4482 scope.go:117] "RemoveContainer" containerID="d84812a555ffdedafcf55f0c474a9703c65d1fb93d154179be65ddf6b69c96ac" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.849895 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg869\" (UniqueName: \"kubernetes.io/projected/337411b1-ff37-4370-ad36-415f816f5d07-kube-api-access-wg869\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4mr9n\" (UID: \"337411b1-ff37-4370-ad36-415f816f5d07\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.873970 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg869\" (UniqueName: \"kubernetes.io/projected/337411b1-ff37-4370-ad36-415f816f5d07-kube-api-access-wg869\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4mr9n\" (UID: \"337411b1-ff37-4370-ad36-415f816f5d07\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" Nov 25 06:59:39 crc kubenswrapper[4482]: I1125 06:59:39.972375 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.158357 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-kmdnq\" (UID: \"004e08bd-55ee-4702-88b6-69bd67a32610\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.158674 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-kmdnq\" (UID: \"004e08bd-55ee-4702-88b6-69bd67a32610\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:40 crc kubenswrapper[4482]: E1125 06:59:40.158813 4482 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 06:59:40 crc kubenswrapper[4482]: E1125 06:59:40.158869 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-webhook-certs podName:004e08bd-55ee-4702-88b6-69bd67a32610 nodeName:}" failed. No retries permitted until 2025-11-25 06:59:41.158850612 +0000 UTC m=+755.647081871 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-webhook-certs") pod "openstack-operator-controller-manager-7cd5954d9-kmdnq" (UID: "004e08bd-55ee-4702-88b6-69bd67a32610") : secret "webhook-server-cert" not found Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.167988 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-kmdnq\" (UID: \"004e08bd-55ee-4702-88b6-69bd67a32610\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.364245 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a5cd60b-13ff-44ea-b256-1e05d03912e4-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-lx6v6\" (UID: \"3a5cd60b-13ff-44ea-b256-1e05d03912e4\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.367694 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a5cd60b-13ff-44ea-b256-1e05d03912e4-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-lx6v6\" (UID: \"3a5cd60b-13ff-44ea-b256-1e05d03912e4\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.372895 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.435739 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk"] Nov 25 06:59:40 crc kubenswrapper[4482]: W1125 06:59:40.442615 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3eb6724_3ab3_4027_b8e6_3d90c403f13a.slice/crio-6fdf0848038aad3d218f2208ad726a70cc211e5c9a529b2db54f0a413b2bc994 WatchSource:0}: Error finding container 6fdf0848038aad3d218f2208ad726a70cc211e5c9a529b2db54f0a413b2bc994: Status 404 returned error can't find the container with id 6fdf0848038aad3d218f2208ad726a70cc211e5c9a529b2db54f0a413b2bc994 Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.480410 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx"] Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.484760 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g"] Nov 25 06:59:40 crc kubenswrapper[4482]: W1125 06:59:40.494282 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2375b89e_398f_45d4_badc_1980cfcda4a1.slice/crio-14b8636f7cd3692452498ff41b64a2152135e2e319852f2be4a73b7f9b48122c WatchSource:0}: Error finding container 14b8636f7cd3692452498ff41b64a2152135e2e319852f2be4a73b7f9b48122c: Status 404 returned error can't find the container with id 14b8636f7cd3692452498ff41b64a2152135e2e319852f2be4a73b7f9b48122c Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.499646 4482 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx"] Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.567548 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee690930-78a0-4f7d-be10-feee0cf523d7-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-tlwch\" (UID: \"ee690930-78a0-4f7d-be10-feee0cf523d7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" Nov 25 06:59:40 crc kubenswrapper[4482]: E1125 06:59:40.568278 4482 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 06:59:40 crc kubenswrapper[4482]: E1125 06:59:40.568415 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee690930-78a0-4f7d-be10-feee0cf523d7-cert podName:ee690930-78a0-4f7d-be10-feee0cf523d7 nodeName:}" failed. No retries permitted until 2025-11-25 06:59:42.568390276 +0000 UTC m=+757.056621535 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee690930-78a0-4f7d-be10-feee0cf523d7-cert") pod "openstack-baremetal-operator-controller-manager-b58f89467-tlwch" (UID: "ee690930-78a0-4f7d-be10-feee0cf523d7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.757421 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss"] Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.768218 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg"] Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.774288 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq"] Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.779211 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk"] Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.783299 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc"] Nov 25 06:59:40 crc kubenswrapper[4482]: W1125 06:59:40.788462 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a4c6e25_e4fb_49b7_b757_e82e153fdb24.slice/crio-d85f854ed7a342a287ac752ca0f9b14bbc73552e917d6f239f22f76a11b23f54 WatchSource:0}: Error finding container d85f854ed7a342a287ac752ca0f9b14bbc73552e917d6f239f22f76a11b23f54: Status 404 returned error can't find the container with id d85f854ed7a342a287ac752ca0f9b14bbc73552e917d6f239f22f76a11b23f54 Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.833974 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" event={"ID":"42e69f15-3b24-4d83-840e-3633c1bb87a3","Type":"ContainerStarted","Data":"5b404046460196fcd81eb61c6dfea3386c998d8f456dc57473a68f5fd1c6a8aa"} Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.841249 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" 
event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"6be423e1d99d845691f688b98451ff731b0a6e0f033aa86bb907250d322d441c"} Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.853219 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" event={"ID":"2375b89e-398f-45d4-badc-1980cfcda4a1","Type":"ContainerStarted","Data":"14b8636f7cd3692452498ff41b64a2152135e2e319852f2be4a73b7f9b48122c"} Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.856405 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" event={"ID":"d0b2883e-6d53-465c-ba0c-45173ff59d4b","Type":"ContainerStarted","Data":"e42b9f03cf83967a3f2d11f8fe16eb1bae95e101c29b736c70c7cb8400204033"} Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.859438 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" event={"ID":"20c9d02f-1cbc-4c66-84ff-7cbf40bac507","Type":"ContainerStarted","Data":"cabc810f51664c989d9646f0670c03c6462582f0abe28a4bec89e22cd1f1f620"} Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.860410 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" event={"ID":"a2dcdd81-a863-4453-b1b6-e1824d5444b6","Type":"ContainerStarted","Data":"f167e60351d23f4cb33c7e0da070b8f4ae6720b782674b962fb52836adf16752"} Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.862510 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" event={"ID":"4a4c6e25-e4fb-49b7-b757-e82e153fdb24","Type":"ContainerStarted","Data":"d85f854ed7a342a287ac752ca0f9b14bbc73552e917d6f239f22f76a11b23f54"} Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.863942 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" event={"ID":"1af05cb8-e059-49d7-91dc-17bfecaec8db","Type":"ContainerStarted","Data":"c392a09677e548da23f917862f6a92a2b445c6e0a21677e97c9ff4aaa80d1636"} Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.866611 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" event={"ID":"3ec6220d-a590-404d-a427-98b94a3910c8","Type":"ContainerStarted","Data":"56362b9475769061281e48648cbd3087632f8d5b9e9e657ccd00dd93345526cf"} Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.871521 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" event={"ID":"9dbafcad-7706-4390-9745-238418d06f5c","Type":"ContainerStarted","Data":"e5f44841177565a86e2311b146372327e94d25db4e7c5155ac4dd92c3b97f5a8"} Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.873569 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" event={"ID":"f3eb6724-3ab3-4027-b8e6-3d90c403f13a","Type":"ContainerStarted","Data":"6fdf0848038aad3d218f2208ad726a70cc211e5c9a529b2db54f0a413b2bc994"} Nov 25 06:59:40 crc kubenswrapper[4482]: I1125 06:59:40.874827 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" 
event={"ID":"4012508a-01a7-4e14-812e-7c70b350662a","Type":"ContainerStarted","Data":"1f21ca1b7281b067794f49fcdff26d35793028928456c40a75be0b5eba6941ad"} Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.180849 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-kmdnq\" (UID: \"004e08bd-55ee-4702-88b6-69bd67a32610\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.181236 4482 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.181376 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-webhook-certs podName:004e08bd-55ee-4702-88b6-69bd67a32610 nodeName:}" failed. No retries permitted until 2025-11-25 06:59:43.181357182 +0000 UTC m=+757.669588440 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-webhook-certs") pod "openstack-operator-controller-manager-7cd5954d9-kmdnq" (UID: "004e08bd-55ee-4702-88b6-69bd67a32610") : secret "webhook-server-cert" not found Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.274789 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-s25q8"] Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.299385 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr"] Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.306686 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp"] Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.339110 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt"] Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.344357 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm"] Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.351696 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-m7kcf"] Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.354518 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h"] Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.361928 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6"] Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.365116 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n"] Nov 25 06:59:41 crc kubenswrapper[4482]: W1125 06:59:41.388552 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ab40028_48ce_48f7_bbd4_97b1bed0cf4c.slice/crio-656ff6257c5b6b2270c9a885df67af9d0b03ff5bfced6a418a940331fc1f2315 WatchSource:0}: Error 
finding container 656ff6257c5b6b2270c9a885df67af9d0b03ff5bfced6a418a940331fc1f2315: Status 404 returned error can't find the container with id 656ff6257c5b6b2270c9a885df67af9d0b03ff5bfced6a418a940331fc1f2315 Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.390148 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dctdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-m7kcf_openstack-operators(4ab40028-48ce-48f7-bbd4-97b1bed0cf4c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 06:59:41 crc kubenswrapper[4482]: W1125 06:59:41.391415 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d7476c3_dd4a_4e22_a018_e9a93d53ece5.slice/crio-7530c2835673599195913843d89cd38ae3f8b110709060dcd40f08cc152db0ce WatchSource:0}: Error finding container 7530c2835673599195913843d89cd38ae3f8b110709060dcd40f08cc152db0ce: Status 404 returned error can't find the container with id 7530c2835673599195913843d89cd38ae3f8b110709060dcd40f08cc152db0ce Nov 25 06:59:41 crc kubenswrapper[4482]: W1125 06:59:41.393115 4482 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod337411b1_ff37_4370_ad36_415f816f5d07.slice/crio-1e90fcae37a126596546e3c1b6c83013399e47ddb8f8657abe64e5afc5d7b35a WatchSource:0}: Error finding container 1e90fcae37a126596546e3c1b6c83013399e47ddb8f8657abe64e5afc5d7b35a: Status 404 returned error can't find the container with id 1e90fcae37a126596546e3c1b6c83013399e47ddb8f8657abe64e5afc5d7b35a Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.394998 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dctdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-m7kcf_openstack-operators(4ab40028-48ce-48f7-bbd4-97b1bed0cf4c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.395968 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wg869,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-4mr9n_openstack-operators(337411b1-ff37-4370-ad36-415f816f5d07): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.396919 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" podUID="4ab40028-48ce-48f7-bbd4-97b1bed0cf4c" Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.397113 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" podUID="337411b1-ff37-4370-ad36-415f816f5d07" Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.401048 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vknz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-jq46h_openstack-operators(4d7476c3-dd4a-4e22-a018-e9a93d53ece5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 06:59:41 crc kubenswrapper[4482]: W1125 06:59:41.407700 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a5cd60b_13ff_44ea_b256_1e05d03912e4.slice/crio-029bb8e30f8d015855eeeb45b56af46911128ee46c8200dc1f9d989d473c9a61 WatchSource:0}: Error finding container 029bb8e30f8d015855eeeb45b56af46911128ee46c8200dc1f9d989d473c9a61: Status 404 returned error can't find the container with id 029bb8e30f8d015855eeeb45b56af46911128ee46c8200dc1f9d989d473c9a61 Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.407893 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vknz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-jq46h_openstack-operators(4d7476c3-dd4a-4e22-a018-e9a93d53ece5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.409070 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" podUID="4d7476c3-dd4a-4e22-a018-e9a93d53ece5" Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.411632 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqbvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-d5cc86f4b-lx6v6_openstack-operators(3a5cd60b-13ff-44ea-b256-1e05d03912e4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.413706 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cqbvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-d5cc86f4b-lx6v6_openstack-operators(3a5cd60b-13ff-44ea-b256-1e05d03912e4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.414947 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" podUID="3a5cd60b-13ff-44ea-b256-1e05d03912e4" Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.891548 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" event={"ID":"4a627cd2-d42b-4958-a41c-230dd8246061","Type":"ContainerStarted","Data":"d122164a311b463b13555cf175a1feced50ec66307dda2f6758e94720dd99835"} Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.898095 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" event={"ID":"4be124a3-1fa2-455c-834f-01e66fc326b3","Type":"ContainerStarted","Data":"6f03e59b92d8837c1a7cf69b5c0852328744d0603dd842cd1e1db809af8ad4c0"} Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.899644 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" event={"ID":"3a5cd60b-13ff-44ea-b256-1e05d03912e4","Type":"ContainerStarted","Data":"029bb8e30f8d015855eeeb45b56af46911128ee46c8200dc1f9d989d473c9a61"} Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.906056 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" event={"ID":"337411b1-ff37-4370-ad36-415f816f5d07","Type":"ContainerStarted","Data":"1e90fcae37a126596546e3c1b6c83013399e47ddb8f8657abe64e5afc5d7b35a"} Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.906215 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" podUID="3a5cd60b-13ff-44ea-b256-1e05d03912e4" Nov 25 06:59:41 crc kubenswrapper[4482]: 
I1125 06:59:41.908884 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" event={"ID":"4d7476c3-dd4a-4e22-a018-e9a93d53ece5","Type":"ContainerStarted","Data":"7530c2835673599195913843d89cd38ae3f8b110709060dcd40f08cc152db0ce"} Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.910753 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" event={"ID":"3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a","Type":"ContainerStarted","Data":"635d3b5d61591d093038326cbad954500207485a376b5fd585b7c24f319c52fc"} Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.913465 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" podUID="337411b1-ff37-4370-ad36-415f816f5d07" Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.923810 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" podUID="4d7476c3-dd4a-4e22-a018-e9a93d53ece5" Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.924349 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8" event={"ID":"7059a6d7-9dca-499a-9110-e8dafb53935b","Type":"ContainerStarted","Data":"5f46a8d32605f918f0763efb9af0dbe05f87666e7556a4da6ae66d612a79a9e4"} Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.928619 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" event={"ID":"4ab40028-48ce-48f7-bbd4-97b1bed0cf4c","Type":"ContainerStarted","Data":"656ff6257c5b6b2270c9a885df67af9d0b03ff5bfced6a418a940331fc1f2315"} Nov 25 06:59:41 crc kubenswrapper[4482]: E1125 06:59:41.932044 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" podUID="4ab40028-48ce-48f7-bbd4-97b1bed0cf4c" Nov 25 06:59:41 crc kubenswrapper[4482]: I1125 06:59:41.932832 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" event={"ID":"6ad00506-e452-4f9e-91d3-24b4da4a7104","Type":"ContainerStarted","Data":"cec1d7a33ddf73b21c8b5ffe45583e15c3e5b038b9b7696de4eb0983b1b71e13"} Nov 25 06:59:42 crc kubenswrapper[4482]: I1125 06:59:42.628021 4482 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee690930-78a0-4f7d-be10-feee0cf523d7-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-tlwch\" (UID: \"ee690930-78a0-4f7d-be10-feee0cf523d7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" Nov 25 06:59:42 crc kubenswrapper[4482]: I1125 06:59:42.651932 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee690930-78a0-4f7d-be10-feee0cf523d7-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-tlwch\" (UID: \"ee690930-78a0-4f7d-be10-feee0cf523d7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" Nov 25 06:59:42 crc kubenswrapper[4482]: I1125 06:59:42.717911 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" Nov 25 06:59:42 crc kubenswrapper[4482]: E1125 06:59:42.943327 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" podUID="337411b1-ff37-4370-ad36-415f816f5d07" Nov 25 06:59:42 crc kubenswrapper[4482]: E1125 06:59:42.943697 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" podUID="4d7476c3-dd4a-4e22-a018-e9a93d53ece5" Nov 25 06:59:42 crc kubenswrapper[4482]: E1125 06:59:42.946512 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" podUID="4ab40028-48ce-48f7-bbd4-97b1bed0cf4c" Nov 25 06:59:42 crc kubenswrapper[4482]: E1125 06:59:42.946562 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" podUID="3a5cd60b-13ff-44ea-b256-1e05d03912e4" Nov 25 06:59:43 crc kubenswrapper[4482]: I1125 06:59:43.234641 4482 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-kmdnq\" (UID: \"004e08bd-55ee-4702-88b6-69bd67a32610\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:43 crc kubenswrapper[4482]: I1125 06:59:43.240290 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/004e08bd-55ee-4702-88b6-69bd67a32610-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-kmdnq\" (UID: \"004e08bd-55ee-4702-88b6-69bd67a32610\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:43 crc kubenswrapper[4482]: I1125 06:59:43.329615 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 06:59:46 crc kubenswrapper[4482]: I1125 06:59:46.601256 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wdff4"] Nov 25 06:59:46 crc kubenswrapper[4482]: I1125 06:59:46.603193 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wdff4" Nov 25 06:59:46 crc kubenswrapper[4482]: I1125 06:59:46.604079 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wdff4"] Nov 25 06:59:46 crc kubenswrapper[4482]: I1125 06:59:46.697815 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079e38f7-2ae4-43ed-a466-09930f83d081-utilities\") pod \"community-operators-wdff4\" (UID: \"079e38f7-2ae4-43ed-a466-09930f83d081\") " pod="openshift-marketplace/community-operators-wdff4" Nov 25 06:59:46 crc kubenswrapper[4482]: I1125 06:59:46.697912 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079e38f7-2ae4-43ed-a466-09930f83d081-catalog-content\") pod \"community-operators-wdff4\" (UID: \"079e38f7-2ae4-43ed-a466-09930f83d081\") " pod="openshift-marketplace/community-operators-wdff4" Nov 25 06:59:46 crc kubenswrapper[4482]: I1125 06:59:46.697934 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwcw2\" (UniqueName: \"kubernetes.io/projected/079e38f7-2ae4-43ed-a466-09930f83d081-kube-api-access-wwcw2\") pod \"community-operators-wdff4\" (UID: \"079e38f7-2ae4-43ed-a466-09930f83d081\") " pod="openshift-marketplace/community-operators-wdff4" Nov 25 06:59:46 crc kubenswrapper[4482]: I1125 06:59:46.799472 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079e38f7-2ae4-43ed-a466-09930f83d081-utilities\") pod \"community-operators-wdff4\" (UID: \"079e38f7-2ae4-43ed-a466-09930f83d081\") " pod="openshift-marketplace/community-operators-wdff4" Nov 25 06:59:46 crc kubenswrapper[4482]: I1125 06:59:46.799545 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079e38f7-2ae4-43ed-a466-09930f83d081-catalog-content\") pod \"community-operators-wdff4\" (UID: \"079e38f7-2ae4-43ed-a466-09930f83d081\") " 
pod="openshift-marketplace/community-operators-wdff4" Nov 25 06:59:46 crc kubenswrapper[4482]: I1125 06:59:46.799564 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwcw2\" (UniqueName: \"kubernetes.io/projected/079e38f7-2ae4-43ed-a466-09930f83d081-kube-api-access-wwcw2\") pod \"community-operators-wdff4\" (UID: \"079e38f7-2ae4-43ed-a466-09930f83d081\") " pod="openshift-marketplace/community-operators-wdff4" Nov 25 06:59:46 crc kubenswrapper[4482]: I1125 06:59:46.801008 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079e38f7-2ae4-43ed-a466-09930f83d081-catalog-content\") pod \"community-operators-wdff4\" (UID: \"079e38f7-2ae4-43ed-a466-09930f83d081\") " pod="openshift-marketplace/community-operators-wdff4" Nov 25 06:59:46 crc kubenswrapper[4482]: I1125 06:59:46.802511 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079e38f7-2ae4-43ed-a466-09930f83d081-utilities\") pod \"community-operators-wdff4\" (UID: \"079e38f7-2ae4-43ed-a466-09930f83d081\") " pod="openshift-marketplace/community-operators-wdff4" Nov 25 06:59:46 crc kubenswrapper[4482]: I1125 06:59:46.823191 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwcw2\" (UniqueName: \"kubernetes.io/projected/079e38f7-2ae4-43ed-a466-09930f83d081-kube-api-access-wwcw2\") pod \"community-operators-wdff4\" (UID: \"079e38f7-2ae4-43ed-a466-09930f83d081\") " pod="openshift-marketplace/community-operators-wdff4" Nov 25 06:59:46 crc kubenswrapper[4482]: I1125 06:59:46.923043 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wdff4" Nov 25 06:59:51 crc kubenswrapper[4482]: I1125 06:59:51.254562 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zcgrk"] Nov 25 06:59:51 crc kubenswrapper[4482]: I1125 06:59:51.257759 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcgrk" Nov 25 06:59:51 crc kubenswrapper[4482]: I1125 06:59:51.271861 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcgrk"] Nov 25 06:59:51 crc kubenswrapper[4482]: I1125 06:59:51.364590 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add1b1a6-f427-464f-93f1-4f2f2cd92e43-utilities\") pod \"redhat-marketplace-zcgrk\" (UID: \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\") " pod="openshift-marketplace/redhat-marketplace-zcgrk" Nov 25 06:59:51 crc kubenswrapper[4482]: I1125 06:59:51.364651 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kmdx\" (UniqueName: \"kubernetes.io/projected/add1b1a6-f427-464f-93f1-4f2f2cd92e43-kube-api-access-7kmdx\") pod \"redhat-marketplace-zcgrk\" (UID: \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\") " pod="openshift-marketplace/redhat-marketplace-zcgrk" Nov 25 06:59:51 crc kubenswrapper[4482]: I1125 06:59:51.364732 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add1b1a6-f427-464f-93f1-4f2f2cd92e43-catalog-content\") pod \"redhat-marketplace-zcgrk\" (UID: \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\") " pod="openshift-marketplace/redhat-marketplace-zcgrk" Nov 25 06:59:51 crc kubenswrapper[4482]: I1125 06:59:51.465995 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add1b1a6-f427-464f-93f1-4f2f2cd92e43-catalog-content\") pod \"redhat-marketplace-zcgrk\" (UID: \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\") " pod="openshift-marketplace/redhat-marketplace-zcgrk" Nov 25 06:59:51 crc kubenswrapper[4482]: I1125 06:59:51.466208 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add1b1a6-f427-464f-93f1-4f2f2cd92e43-utilities\") pod \"redhat-marketplace-zcgrk\" (UID: \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\") " pod="openshift-marketplace/redhat-marketplace-zcgrk" Nov 25 06:59:51 crc kubenswrapper[4482]: I1125 06:59:51.466313 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kmdx\" (UniqueName: \"kubernetes.io/projected/add1b1a6-f427-464f-93f1-4f2f2cd92e43-kube-api-access-7kmdx\") pod \"redhat-marketplace-zcgrk\" (UID: \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\") " pod="openshift-marketplace/redhat-marketplace-zcgrk" Nov 25 06:59:51 crc kubenswrapper[4482]: I1125 06:59:51.466683 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add1b1a6-f427-464f-93f1-4f2f2cd92e43-catalog-content\") pod \"redhat-marketplace-zcgrk\" (UID: \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\") " pod="openshift-marketplace/redhat-marketplace-zcgrk" Nov 25 06:59:51 crc kubenswrapper[4482]: I1125 06:59:51.466706 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add1b1a6-f427-464f-93f1-4f2f2cd92e43-utilities\") pod \"redhat-marketplace-zcgrk\" (UID: \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\") " pod="openshift-marketplace/redhat-marketplace-zcgrk" Nov 25 06:59:51 crc kubenswrapper[4482]: I1125 06:59:51.489936 4482 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7kmdx\" (UniqueName: \"kubernetes.io/projected/add1b1a6-f427-464f-93f1-4f2f2cd92e43-kube-api-access-7kmdx\") pod \"redhat-marketplace-zcgrk\" (UID: \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\") " pod="openshift-marketplace/redhat-marketplace-zcgrk" Nov 25 06:59:51 crc kubenswrapper[4482]: I1125 06:59:51.578248 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcgrk" Nov 25 06:59:53 crc kubenswrapper[4482]: E1125 06:59:53.005057 4482 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96" Nov 25 06:59:53 crc kubenswrapper[4482]: E1125 06:59:53.005233 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2mkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-774b86978c-t6mdk_openstack-operators(f3eb6724-3ab3-4027-b8e6-3d90c403f13a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 06:59:54 crc kubenswrapper[4482]: E1125 06:59:54.689699 4482 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f" Nov 25 06:59:54 crc kubenswrapper[4482]: E1125 06:59:54.690123 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pwxsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-zdvcm_openstack-operators(4be124a3-1fa2-455c-834f-01e66fc326b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 06:59:55 crc kubenswrapper[4482]: E1125 06:59:55.062281 4482 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0" Nov 25 06:59:55 crc kubenswrapper[4482]: E1125 06:59:55.063433 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvlhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-5zxlt_openstack-operators(4a627cd2-d42b-4958-a41c-230dd8246061): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 06:59:55 crc kubenswrapper[4482]: E1125 06:59:55.602214 4482 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d" Nov 25 06:59:55 crc kubenswrapper[4482]: E1125 06:59:55.602631 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2hwxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-s25q8_openstack-operators(7059a6d7-9dca-499a-9110-e8dafb53935b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 06:59:56 crc kubenswrapper[4482]: I1125 06:59:56.052494 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" event={"ID":"2375b89e-398f-45d4-badc-1980cfcda4a1","Type":"ContainerStarted","Data":"1983925be4c5b314e70dfc5f4f37025f1a92be80e343a14b542879b9e83f4201"} Nov 25 06:59:56 crc kubenswrapper[4482]: I1125 06:59:56.099418 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch"] Nov 25 06:59:56 crc kubenswrapper[4482]: I1125 06:59:56.315831 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq"] Nov 25 06:59:56 crc kubenswrapper[4482]: I1125 06:59:56.362890 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcgrk"] Nov 25 06:59:56 crc kubenswrapper[4482]: I1125 06:59:56.380733 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wdff4"] Nov 25 06:59:57 crc kubenswrapper[4482]: I1125 06:59:57.061343 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" event={"ID":"9dbafcad-7706-4390-9745-238418d06f5c","Type":"ContainerStarted","Data":"0a5c174cc595bb4e16b69c8475a26cb7391b67e66693437f57fe83f6bfedb8bc"} Nov 25 06:59:57 crc kubenswrapper[4482]: I1125 06:59:57.062795 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" event={"ID":"3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a","Type":"ContainerStarted","Data":"b29a95a87a1294605d75d120af662060c297dc5140134bbe49d1c0429f58aad1"} Nov 25 06:59:57 crc kubenswrapper[4482]: I1125 06:59:57.064826 4482 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" event={"ID":"1af05cb8-e059-49d7-91dc-17bfecaec8db","Type":"ContainerStarted","Data":"ecf3504ce636e98d632396fe10440e17216ee3f25454bdc21c806f6df0584169"} Nov 25 06:59:57 crc kubenswrapper[4482]: I1125 06:59:57.066058 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" event={"ID":"a2dcdd81-a863-4453-b1b6-e1824d5444b6","Type":"ContainerStarted","Data":"26d0a2435aa24cf9bc994091d4da1326235c9369561b8fcccb3fac21087db6e7"} Nov 25 06:59:57 crc kubenswrapper[4482]: I1125 06:59:57.067477 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" event={"ID":"4754fff5-c20f-42c5-8c10-bb9975919bf3","Type":"ContainerStarted","Data":"4b14250b497648f6feceb8b6e551b8c260869eec987ab73b40aa939fa27d792a"} Nov 25 06:59:57 crc kubenswrapper[4482]: I1125 06:59:57.068793 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" event={"ID":"ee690930-78a0-4f7d-be10-feee0cf523d7","Type":"ContainerStarted","Data":"e253836dc37d85b52872ee56c6b7783654b2535a69a67200e0577969fb2b6ca2"} Nov 25 06:59:57 crc kubenswrapper[4482]: W1125 06:59:57.669315 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod004e08bd_55ee_4702_88b6_69bd67a32610.slice/crio-f9a7cc7afe0c7e8f4f662e8f3e9b37202886527d90d7bac8a4b63f99751256c2 WatchSource:0}: Error finding container f9a7cc7afe0c7e8f4f662e8f3e9b37202886527d90d7bac8a4b63f99751256c2: Status 404 returned error can't find the container with id f9a7cc7afe0c7e8f4f662e8f3e9b37202886527d90d7bac8a4b63f99751256c2 Nov 25 06:59:57 crc kubenswrapper[4482]: W1125 06:59:57.681434 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod079e38f7_2ae4_43ed_a466_09930f83d081.slice/crio-10e012508e76a650c9df06fcd57448e6db1ab2f8f0d30f1c482d7d50926c8c00 WatchSource:0}: Error finding container 10e012508e76a650c9df06fcd57448e6db1ab2f8f0d30f1c482d7d50926c8c00: Status 404 returned error can't find the container with id 10e012508e76a650c9df06fcd57448e6db1ab2f8f0d30f1c482d7d50926c8c00 Nov 25 06:59:57 crc kubenswrapper[4482]: W1125 06:59:57.682546 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadd1b1a6_f427_464f_93f1_4f2f2cd92e43.slice/crio-b700e705f614e7d1906a37432e13254b5a3c3906af1c358a11e2efdc8201974b WatchSource:0}: Error finding container b700e705f614e7d1906a37432e13254b5a3c3906af1c358a11e2efdc8201974b: Status 404 returned error can't find the container with id b700e705f614e7d1906a37432e13254b5a3c3906af1c358a11e2efdc8201974b Nov 25 06:59:58 crc kubenswrapper[4482]: I1125 06:59:58.077764 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcgrk" event={"ID":"add1b1a6-f427-464f-93f1-4f2f2cd92e43","Type":"ContainerStarted","Data":"b700e705f614e7d1906a37432e13254b5a3c3906af1c358a11e2efdc8201974b"} Nov 25 06:59:58 crc kubenswrapper[4482]: I1125 06:59:58.079098 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" 
event={"ID":"004e08bd-55ee-4702-88b6-69bd67a32610","Type":"ContainerStarted","Data":"f9a7cc7afe0c7e8f4f662e8f3e9b37202886527d90d7bac8a4b63f99751256c2"} Nov 25 06:59:58 crc kubenswrapper[4482]: I1125 06:59:58.080242 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdff4" event={"ID":"079e38f7-2ae4-43ed-a466-09930f83d081","Type":"ContainerStarted","Data":"10e012508e76a650c9df06fcd57448e6db1ab2f8f0d30f1c482d7d50926c8c00"} Nov 25 06:59:58 crc kubenswrapper[4482]: I1125 06:59:58.082149 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" event={"ID":"4012508a-01a7-4e14-812e-7c70b350662a","Type":"ContainerStarted","Data":"4af88cb6b77bd9336f070a73a721621cb2ff8640147717b3b09bbfa9438605a4"} Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.099439 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" event={"ID":"3ec6220d-a590-404d-a427-98b94a3910c8","Type":"ContainerStarted","Data":"91b6c9a970b394c7002c806e5d03f8310112460dd8aaa192b5f661ac0e531499"} Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.101007 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" event={"ID":"42e69f15-3b24-4d83-840e-3633c1bb87a3","Type":"ContainerStarted","Data":"51df49ea9df8ffe25ee067829772be2a565f083b8a71ffcdee985d7f6216e156"} Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.143537 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz"] Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.144499 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.152388 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.152679 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.161264 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz"] Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.302723 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ef458b3-5100-4773-8b07-ed066b2b29ee-secret-volume\") pod \"collect-profiles-29400900-p7wjz\" (UID: \"4ef458b3-5100-4773-8b07-ed066b2b29ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.302911 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9qdr\" (UniqueName: \"kubernetes.io/projected/4ef458b3-5100-4773-8b07-ed066b2b29ee-kube-api-access-f9qdr\") pod \"collect-profiles-29400900-p7wjz\" (UID: \"4ef458b3-5100-4773-8b07-ed066b2b29ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.302957 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ef458b3-5100-4773-8b07-ed066b2b29ee-config-volume\") pod \"collect-profiles-29400900-p7wjz\" (UID: \"4ef458b3-5100-4773-8b07-ed066b2b29ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.404899 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9qdr\" (UniqueName: \"kubernetes.io/projected/4ef458b3-5100-4773-8b07-ed066b2b29ee-kube-api-access-f9qdr\") pod \"collect-profiles-29400900-p7wjz\" (UID: \"4ef458b3-5100-4773-8b07-ed066b2b29ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.404948 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ef458b3-5100-4773-8b07-ed066b2b29ee-config-volume\") pod \"collect-profiles-29400900-p7wjz\" (UID: \"4ef458b3-5100-4773-8b07-ed066b2b29ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.405042 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ef458b3-5100-4773-8b07-ed066b2b29ee-secret-volume\") pod \"collect-profiles-29400900-p7wjz\" (UID: \"4ef458b3-5100-4773-8b07-ed066b2b29ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.407376 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ef458b3-5100-4773-8b07-ed066b2b29ee-config-volume\") pod 
\"collect-profiles-29400900-p7wjz\" (UID: \"4ef458b3-5100-4773-8b07-ed066b2b29ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.425302 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ef458b3-5100-4773-8b07-ed066b2b29ee-secret-volume\") pod \"collect-profiles-29400900-p7wjz\" (UID: \"4ef458b3-5100-4773-8b07-ed066b2b29ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.425401 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9qdr\" (UniqueName: \"kubernetes.io/projected/4ef458b3-5100-4773-8b07-ed066b2b29ee-kube-api-access-f9qdr\") pod \"collect-profiles-29400900-p7wjz\" (UID: \"4ef458b3-5100-4773-8b07-ed066b2b29ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" Nov 25 07:00:00 crc kubenswrapper[4482]: I1125 07:00:00.469354 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" Nov 25 07:00:02 crc kubenswrapper[4482]: I1125 07:00:02.685478 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vvhb9"] Nov 25 07:00:02 crc kubenswrapper[4482]: I1125 07:00:02.687009 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vvhb9" Nov 25 07:00:02 crc kubenswrapper[4482]: I1125 07:00:02.699339 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vvhb9"] Nov 25 07:00:02 crc kubenswrapper[4482]: I1125 07:00:02.847298 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36202d20-113a-4c20-8f4d-1f85dc2c0853-catalog-content\") pod \"certified-operators-vvhb9\" (UID: \"36202d20-113a-4c20-8f4d-1f85dc2c0853\") " pod="openshift-marketplace/certified-operators-vvhb9" Nov 25 07:00:02 crc kubenswrapper[4482]: I1125 07:00:02.847358 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36202d20-113a-4c20-8f4d-1f85dc2c0853-utilities\") pod \"certified-operators-vvhb9\" (UID: \"36202d20-113a-4c20-8f4d-1f85dc2c0853\") " pod="openshift-marketplace/certified-operators-vvhb9" Nov 25 07:00:02 crc kubenswrapper[4482]: I1125 07:00:02.847396 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwk77\" (UniqueName: \"kubernetes.io/projected/36202d20-113a-4c20-8f4d-1f85dc2c0853-kube-api-access-hwk77\") pod \"certified-operators-vvhb9\" (UID: \"36202d20-113a-4c20-8f4d-1f85dc2c0853\") " pod="openshift-marketplace/certified-operators-vvhb9" Nov 25 07:00:02 crc kubenswrapper[4482]: I1125 07:00:02.948681 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36202d20-113a-4c20-8f4d-1f85dc2c0853-utilities\") pod \"certified-operators-vvhb9\" (UID: \"36202d20-113a-4c20-8f4d-1f85dc2c0853\") " pod="openshift-marketplace/certified-operators-vvhb9" Nov 25 07:00:02 crc kubenswrapper[4482]: I1125 07:00:02.948813 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwk77\" 
(UniqueName: \"kubernetes.io/projected/36202d20-113a-4c20-8f4d-1f85dc2c0853-kube-api-access-hwk77\") pod \"certified-operators-vvhb9\" (UID: \"36202d20-113a-4c20-8f4d-1f85dc2c0853\") " pod="openshift-marketplace/certified-operators-vvhb9" Nov 25 07:00:02 crc kubenswrapper[4482]: I1125 07:00:02.949211 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36202d20-113a-4c20-8f4d-1f85dc2c0853-utilities\") pod \"certified-operators-vvhb9\" (UID: \"36202d20-113a-4c20-8f4d-1f85dc2c0853\") " pod="openshift-marketplace/certified-operators-vvhb9" Nov 25 07:00:02 crc kubenswrapper[4482]: I1125 07:00:02.949534 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36202d20-113a-4c20-8f4d-1f85dc2c0853-catalog-content\") pod \"certified-operators-vvhb9\" (UID: \"36202d20-113a-4c20-8f4d-1f85dc2c0853\") " pod="openshift-marketplace/certified-operators-vvhb9" Nov 25 07:00:02 crc kubenswrapper[4482]: I1125 07:00:02.949856 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36202d20-113a-4c20-8f4d-1f85dc2c0853-catalog-content\") pod \"certified-operators-vvhb9\" (UID: \"36202d20-113a-4c20-8f4d-1f85dc2c0853\") " pod="openshift-marketplace/certified-operators-vvhb9" Nov 25 07:00:02 crc kubenswrapper[4482]: I1125 07:00:02.968775 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwk77\" (UniqueName: \"kubernetes.io/projected/36202d20-113a-4c20-8f4d-1f85dc2c0853-kube-api-access-hwk77\") pod \"certified-operators-vvhb9\" (UID: \"36202d20-113a-4c20-8f4d-1f85dc2c0853\") " pod="openshift-marketplace/certified-operators-vvhb9" Nov 25 07:00:02 crc kubenswrapper[4482]: I1125 07:00:02.998709 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vvhb9" Nov 25 07:00:03 crc kubenswrapper[4482]: I1125 07:00:03.121398 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" event={"ID":"d0b2883e-6d53-465c-ba0c-45173ff59d4b","Type":"ContainerStarted","Data":"b0630f0752ed5e54a931a90a68391ae360781290937b624a85bc8c424cb1a609"} Nov 25 07:00:03 crc kubenswrapper[4482]: I1125 07:00:03.125554 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" event={"ID":"20c9d02f-1cbc-4c66-84ff-7cbf40bac507","Type":"ContainerStarted","Data":"64d463c2b03fccbe1f4b60e451e6c389fa29df6ca6188bfcf716aec91055ee23"} Nov 25 07:00:03 crc kubenswrapper[4482]: I1125 07:00:03.128670 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" event={"ID":"004e08bd-55ee-4702-88b6-69bd67a32610","Type":"ContainerStarted","Data":"929c213764e37eb1414b74117ecfbebc19322c88247ec1bd95b57fc3cc5ebe94"} Nov 25 07:00:03 crc kubenswrapper[4482]: I1125 07:00:03.128849 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 07:00:03 crc kubenswrapper[4482]: I1125 07:00:03.132232 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" event={"ID":"4a4c6e25-e4fb-49b7-b757-e82e153fdb24","Type":"ContainerStarted","Data":"763e7585469d4ac62b9482478155c10876dc0d5a7f06d910aa018a0b2b63bd3a"} Nov 25 07:00:04 crc kubenswrapper[4482]: I1125 07:00:04.145581 4482 generic.go:334] "Generic (PLEG): container finished" podID="079e38f7-2ae4-43ed-a466-09930f83d081" containerID="dd0dd460345456742eb8ccafe8fc48b97fc41c65527a528ffdb6d2cf6acc6faa" exitCode=0 Nov 25 07:00:04 crc kubenswrapper[4482]: I1125 07:00:04.147339 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdff4" event={"ID":"079e38f7-2ae4-43ed-a466-09930f83d081","Type":"ContainerDied","Data":"dd0dd460345456742eb8ccafe8fc48b97fc41c65527a528ffdb6d2cf6acc6faa"} Nov 25 07:00:04 crc kubenswrapper[4482]: I1125 07:00:04.180309 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" podStartSLOduration=25.180286671 podStartE2EDuration="25.180286671s" podCreationTimestamp="2025-11-25 06:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:00:03.158352571 +0000 UTC m=+777.646583830" watchObservedRunningTime="2025-11-25 07:00:04.180286671 +0000 UTC m=+778.668517931" Nov 25 07:00:05 crc kubenswrapper[4482]: I1125 07:00:05.055198 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz"] Nov 25 07:00:05 crc kubenswrapper[4482]: I1125 07:00:05.155093 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" event={"ID":"6ad00506-e452-4f9e-91d3-24b4da4a7104","Type":"ContainerStarted","Data":"b73b93dbf5efb76f2a5e9d0ad1289405716e666053d780a636ee9e55dd2ad5d6"} Nov 25 07:00:05 crc kubenswrapper[4482]: W1125 07:00:05.757791 4482 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ef458b3_5100_4773_8b07_ed066b2b29ee.slice/crio-e5c89a1d6d5a40446c0404aa26f7da3821a67844e97a984eb649d06504240e9e WatchSource:0}: Error finding container e5c89a1d6d5a40446c0404aa26f7da3821a67844e97a984eb649d06504240e9e: Status 404 returned error can't find the container with id e5c89a1d6d5a40446c0404aa26f7da3821a67844e97a984eb649d06504240e9e
Nov 25 07:00:06 crc kubenswrapper[4482]: I1125 07:00:06.171719 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" event={"ID":"4ef458b3-5100-4773-8b07-ed066b2b29ee","Type":"ContainerStarted","Data":"e5c89a1d6d5a40446c0404aa26f7da3821a67844e97a984eb649d06504240e9e"}
Nov 25 07:00:06 crc kubenswrapper[4482]: I1125 07:00:06.173657 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" event={"ID":"4d7476c3-dd4a-4e22-a018-e9a93d53ece5","Type":"ContainerStarted","Data":"d50aa3ab08ace20a4bf09a1674bed2a916200e3f3205e7a51885e230e64010bd"}
Nov 25 07:00:06 crc kubenswrapper[4482]: I1125 07:00:06.183545 4482 generic.go:334] "Generic (PLEG): container finished" podID="add1b1a6-f427-464f-93f1-4f2f2cd92e43" containerID="9468e49d405889c86dec7ec0cd6ee3d0600ed1263630d3f7b2ef7b4606ed6280" exitCode=0
Nov 25 07:00:06 crc kubenswrapper[4482]: I1125 07:00:06.183571 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcgrk" event={"ID":"add1b1a6-f427-464f-93f1-4f2f2cd92e43","Type":"ContainerDied","Data":"9468e49d405889c86dec7ec0cd6ee3d0600ed1263630d3f7b2ef7b4606ed6280"}
Nov 25 07:00:06 crc kubenswrapper[4482]: I1125 07:00:06.285917 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vvhb9"]
Nov 25 07:00:07 crc kubenswrapper[4482]: E1125 07:00:07.193764 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" podUID="f3eb6724-3ab3-4027-b8e6-3d90c403f13a"
Nov 25 07:00:07 crc kubenswrapper[4482]: E1125 07:00:07.193904 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" podUID="4a627cd2-d42b-4958-a41c-230dd8246061"
Nov 25 07:00:07 crc kubenswrapper[4482]: E1125 07:00:07.194278 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" podUID="4be124a3-1fa2-455c-834f-01e66fc326b3"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.209255 4482 generic.go:334] "Generic (PLEG): container finished" podID="36202d20-113a-4c20-8f4d-1f85dc2c0853" containerID="28d1a91e874542e8979b1c63485f3df6eb351b8d34383af5b82c7aa7b4add10a" exitCode=0
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.209364 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvhb9" event={"ID":"36202d20-113a-4c20-8f4d-1f85dc2c0853","Type":"ContainerDied","Data":"28d1a91e874542e8979b1c63485f3df6eb351b8d34383af5b82c7aa7b4add10a"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.209753 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvhb9" event={"ID":"36202d20-113a-4c20-8f4d-1f85dc2c0853","Type":"ContainerStarted","Data":"6b77db6a1b0d84c49979a449decdcfd7476cafbbce208170e473c5185d82ccad"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.225374 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" event={"ID":"337411b1-ff37-4370-ad36-415f816f5d07","Type":"ContainerStarted","Data":"d873bf5b8b26c1921674543f60f034820f1f6dd2f3e05fca296c94fabb08ac1f"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.249528 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" event={"ID":"4a627cd2-d42b-4958-a41c-230dd8246061","Type":"ContainerStarted","Data":"8952d2b45ff55493099755b77c61725491da5921bffbb00fdcf901d07b88f50f"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.263826 4482 generic.go:334] "Generic (PLEG): container finished" podID="4ef458b3-5100-4773-8b07-ed066b2b29ee" containerID="f98debb079f3ce73ab891e627ada742d3b36fe6821b27abc4678decacc1f480a" exitCode=0
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.263913 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" event={"ID":"4ef458b3-5100-4773-8b07-ed066b2b29ee","Type":"ContainerDied","Data":"f98debb079f3ce73ab891e627ada742d3b36fe6821b27abc4678decacc1f480a"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.268941 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.276614 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" podStartSLOduration=5.184430893 podStartE2EDuration="28.276595806s" podCreationTimestamp="2025-11-25 06:59:39 +0000 UTC" firstStartedPulling="2025-11-25 06:59:41.395892297 +0000 UTC m=+755.884123556" lastFinishedPulling="2025-11-25 07:00:04.48805721 +0000 UTC m=+778.976288469" observedRunningTime="2025-11-25 07:00:07.269425603 +0000 UTC m=+781.757656862" watchObservedRunningTime="2025-11-25 07:00:07.276595806 +0000 UTC m=+781.764827065"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.278859 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" event={"ID":"a2dcdd81-a863-4453-b1b6-e1824d5444b6","Type":"ContainerStarted","Data":"5cae252e38184b8ca2c33825b10e61bcc071f72a5f8c12f3db317d36bc51d458"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.279237 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.286264 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" event={"ID":"4ab40028-48ce-48f7-bbd4-97b1bed0cf4c","Type":"ContainerStarted","Data":"cbe48659e3b993bbd97a48d9f917a0cc45a4edf7f304b49298bda2beafe4bd9a"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.296534 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.303527 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" event={"ID":"4a4c6e25-e4fb-49b7-b757-e82e153fdb24","Type":"ContainerStarted","Data":"d82338426fedf6527d4c73cb052249efdff7b0ece8fdf6e31085887d0795b288"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.304375 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.306158 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" podStartSLOduration=6.3813546389999996 podStartE2EDuration="29.306142841s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:41.400965648 +0000 UTC m=+755.889196907" lastFinishedPulling="2025-11-25 07:00:04.325753851 +0000 UTC m=+778.813985109" observedRunningTime="2025-11-25 07:00:07.299321495 +0000 UTC m=+781.787552755" watchObservedRunningTime="2025-11-25 07:00:07.306142841 +0000 UTC m=+781.794374100"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.310861 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.319115 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" event={"ID":"3a5cd60b-13ff-44ea-b256-1e05d03912e4","Type":"ContainerStarted","Data":"c82a7988d2b2abdd3088d986e7adc8af613611ca65413ba16ae1870e69c10f8d"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.320912 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" event={"ID":"3ec6220d-a590-404d-a427-98b94a3910c8","Type":"ContainerStarted","Data":"6d162986b58801c1991ebb00fee46dd48a5515fbc71aaba94f2ff09624e992f9"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.321608 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.327299 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.335596 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" event={"ID":"42e69f15-3b24-4d83-840e-3633c1bb87a3","Type":"ContainerStarted","Data":"1acbff55f06fd91386b369f3e9a68972baa5d11a766c589704d781eab69c59c5"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.336713 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.342371 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.348201 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.351159 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.375420 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" event={"ID":"6ad00506-e452-4f9e-91d3-24b4da4a7104","Type":"ContainerStarted","Data":"6a40cd200c5e4733debe69c521bd807712b2c54602ff1cc3a6780f6fcdb6f366"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.375955 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.393295 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" podStartSLOduration=3.024496971 podStartE2EDuration="29.393283818s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:39.868446232 +0000 UTC m=+754.356677482" lastFinishedPulling="2025-11-25 07:00:06.237233069 +0000 UTC m=+780.725464329" observedRunningTime="2025-11-25 07:00:07.391745808 +0000 UTC m=+781.879977068" watchObservedRunningTime="2025-11-25 07:00:07.393283818 +0000 UTC m=+781.881515067"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.418660 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" event={"ID":"4be124a3-1fa2-455c-834f-01e66fc326b3","Type":"ContainerStarted","Data":"de3035bc02911c6b5f13fc3db680b4cc72960a60f14e49dfd379c1989572232b"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.446033 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" podStartSLOduration=3.807454562 podStartE2EDuration="29.446015497s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:40.506747873 +0000 UTC m=+754.994979132" lastFinishedPulling="2025-11-25 07:00:06.145308808 +0000 UTC m=+780.633540067" observedRunningTime="2025-11-25 07:00:07.433574691 +0000 UTC m=+781.921805949" watchObservedRunningTime="2025-11-25 07:00:07.446015497 +0000 UTC m=+781.934246756"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.466804 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.480633 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.499303 4482 generic.go:334] "Generic (PLEG): container finished" podID="079e38f7-2ae4-43ed-a466-09930f83d081" containerID="4753ffd54d5f4767d5640ce72daa38dd09eae28e67592f2d36016c0388d3060d" exitCode=0
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.499378 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdff4" event={"ID":"079e38f7-2ae4-43ed-a466-09930f83d081","Type":"ContainerDied","Data":"4753ffd54d5f4767d5640ce72daa38dd09eae28e67592f2d36016c0388d3060d"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.519953 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" podStartSLOduration=3.7910959220000002 podStartE2EDuration="29.519930318s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:40.507437584 +0000 UTC m=+754.995668842" lastFinishedPulling="2025-11-25 07:00:06.236271978 +0000 UTC m=+780.724503238" observedRunningTime="2025-11-25 07:00:07.50804566 +0000 UTC m=+781.996276919" watchObservedRunningTime="2025-11-25 07:00:07.519930318 +0000 UTC m=+782.008161607"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.521228 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" podStartSLOduration=4.168302952 podStartE2EDuration="29.521214418s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:40.792882387 +0000 UTC m=+755.281113646" lastFinishedPulling="2025-11-25 07:00:06.145793852 +0000 UTC m=+780.634025112" observedRunningTime="2025-11-25 07:00:07.466986278 +0000 UTC m=+781.955217547" watchObservedRunningTime="2025-11-25 07:00:07.521214418 +0000 UTC m=+782.009445676"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.530211 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" event={"ID":"9dbafcad-7706-4390-9745-238418d06f5c","Type":"ContainerStarted","Data":"75204773da7e07292cc75a66848c1f66edf9c5f0e2f01e1e37b7b7daa7953130"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.530859 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.544597 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.566236 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" podStartSLOduration=4.08832244 podStartE2EDuration="29.56622208s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:40.774375087 +0000 UTC m=+755.262606346" lastFinishedPulling="2025-11-25 07:00:06.252274727 +0000 UTC m=+780.740505986" observedRunningTime="2025-11-25 07:00:07.554133397 +0000 UTC m=+782.042364676" watchObservedRunningTime="2025-11-25 07:00:07.56622208 +0000 UTC m=+782.054453338"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.569709 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" event={"ID":"3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a","Type":"ContainerStarted","Data":"84df5108482f125f9caad2b73a50f3aaf1b99a926963f523c545916d5ea06d7c"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.570099 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.591124 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" podStartSLOduration=4.875321388 podStartE2EDuration="29.591107589s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:41.33239582 +0000 UTC m=+755.820627080" lastFinishedPulling="2025-11-25 07:00:06.048182022 +0000 UTC m=+780.536413281" observedRunningTime="2025-11-25 07:00:07.576432853 +0000 UTC m=+782.064664112" watchObservedRunningTime="2025-11-25 07:00:07.591107589 +0000 UTC m=+782.079338848"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.598479 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.649524 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" event={"ID":"1af05cb8-e059-49d7-91dc-17bfecaec8db","Type":"ContainerStarted","Data":"2d87194e9bb9775e4123376f4205a00803e961d528ae055eeb9a5f7928019622"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.651815 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.675995 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" event={"ID":"f3eb6724-3ab3-4027-b8e6-3d90c403f13a","Type":"ContainerStarted","Data":"fa968a312497363d99986dfd3a748978841b2fbda4a74f6c816bcc7609472c68"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.681655 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.691914 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" event={"ID":"ee690930-78a0-4f7d-be10-feee0cf523d7","Type":"ContainerStarted","Data":"87fd78bd424794592976dfa77489b3a1fffcd70265b55d2eaa5f2a5ce9cc5952"}
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.701636 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" podStartSLOduration=4.380462631 podStartE2EDuration="29.701625682s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:40.779754205 +0000 UTC m=+755.267985465" lastFinishedPulling="2025-11-25 07:00:06.100917257 +0000 UTC m=+780.589148516" observedRunningTime="2025-11-25 07:00:07.697527529 +0000 UTC m=+782.185758788" watchObservedRunningTime="2025-11-25 07:00:07.701625682 +0000 UTC m=+782.189856942"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.834189 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" podStartSLOduration=4.193122978 podStartE2EDuration="29.834146263s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:40.517810543 +0000 UTC m=+755.006041802" lastFinishedPulling="2025-11-25 07:00:06.158833827 +0000 UTC m=+780.647065087" observedRunningTime="2025-11-25 07:00:07.745581783 +0000 UTC m=+782.233813042" watchObservedRunningTime="2025-11-25 07:00:07.834146263 +0000 UTC m=+782.322377522"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.835615 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" podStartSLOduration=4.386977018 podStartE2EDuration="29.83560364s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:40.803355725 +0000 UTC m=+755.291586984" lastFinishedPulling="2025-11-25 07:00:06.251982347 +0000 UTC m=+780.740213606" observedRunningTime="2025-11-25 07:00:07.832832027 +0000 UTC m=+782.321063275" watchObservedRunningTime="2025-11-25 07:00:07.83560364 +0000 UTC m=+782.323834898"
Nov 25 07:00:07 crc kubenswrapper[4482]: E1125 07:00:07.851514 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8" podUID="7059a6d7-9dca-499a-9110-e8dafb53935b"
Nov 25 07:00:07 crc kubenswrapper[4482]: I1125 07:00:07.948827 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" podStartSLOduration=4.991538783 podStartE2EDuration="29.948799691s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:41.326773034 +0000 UTC m=+755.815004292" lastFinishedPulling="2025-11-25 07:00:06.284033941 +0000 UTC m=+780.772265200" observedRunningTime="2025-11-25 07:00:07.942880775 +0000 UTC m=+782.431112034" watchObservedRunningTime="2025-11-25 07:00:07.948799691 +0000 UTC m=+782.437030949"
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.710715 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" event={"ID":"4ab40028-48ce-48f7-bbd4-97b1bed0cf4c","Type":"ContainerStarted","Data":"a7e79a93d395af39d87eeec865965ab769be2cf5b5537fe79c205dc11db24a2c"}
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.710824 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf"
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.728617 4482 generic.go:334] "Generic (PLEG): container finished" podID="add1b1a6-f427-464f-93f1-4f2f2cd92e43" containerID="2d00348d694a2df0dcbfc72ac9c229769060e8828b566747ea99eaf8db7a903e" exitCode=0
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.728678 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcgrk" event={"ID":"add1b1a6-f427-464f-93f1-4f2f2cd92e43","Type":"ContainerDied","Data":"2d00348d694a2df0dcbfc72ac9c229769060e8828b566747ea99eaf8db7a903e"}
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.733643 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" event={"ID":"4d7476c3-dd4a-4e22-a018-e9a93d53ece5","Type":"ContainerStarted","Data":"bcc2c62bcd00447a921158988e0fd6a475c2db00100f178cae3032c799f2dc3d"}
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.737065 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" event={"ID":"4754fff5-c20f-42c5-8c10-bb9975919bf3","Type":"ContainerStarted","Data":"91ccf34aa85cfe7d2ca49499caf7fc3238a77d060aae5b42739a0ffe5ea7d29a"}
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.737563 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr"
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.740102 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr"
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.751088 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" event={"ID":"ee690930-78a0-4f7d-be10-feee0cf523d7","Type":"ContainerStarted","Data":"841f5e2462259fd3c9d64625d6b1f62bda0a1e5ec7a398c60ce61750a389d4fd"}
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.751347 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch"
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.752558 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" podStartSLOduration=7.769219085 podStartE2EDuration="30.752547544s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:41.389986487 +0000 UTC m=+755.878217736" lastFinishedPulling="2025-11-25 07:00:04.373314936 +0000 UTC m=+778.861546195" observedRunningTime="2025-11-25 07:00:08.750893387 +0000 UTC m=+783.239124646" watchObservedRunningTime="2025-11-25 07:00:08.752547544 +0000 UTC m=+783.240778803"
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.765555 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" event={"ID":"2375b89e-398f-45d4-badc-1980cfcda4a1","Type":"ContainerStarted","Data":"c72aa8b220e4025b531d1364523e0d0f6d816d84aa8bad1f0ae75fbfbf6564bb"}
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.773496 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" podStartSLOduration=4.313466889 podStartE2EDuration="30.773485243s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:39.800589899 +0000 UTC m=+754.288821158" lastFinishedPulling="2025-11-25 07:00:06.260608253 +0000 UTC m=+780.748839512" observedRunningTime="2025-11-25 07:00:08.767878888 +0000 UTC m=+783.256110137" watchObservedRunningTime="2025-11-25 07:00:08.773485243 +0000 UTC m=+783.261716502"
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.775695 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" event={"ID":"4a627cd2-d42b-4958-a41c-230dd8246061","Type":"ContainerStarted","Data":"1dc9279bf9e79ba53fdb68995812068144fddfb6804664ee56c37679d5889bc4"}
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.776128 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt"
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.781485 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8" event={"ID":"7059a6d7-9dca-499a-9110-e8dafb53935b","Type":"ContainerStarted","Data":"ece18bc67ff846136343ad13f716495421d525f89939fff8c1fb1e8a17d64b5d"}
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.795414 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" event={"ID":"d0b2883e-6d53-465c-ba0c-45173ff59d4b","Type":"ContainerStarted","Data":"e82ddac295fd1c0dc962b8c95ca20833b67ab5718fdf1c63b58fd21f24a7d313"}
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.830675 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" event={"ID":"4012508a-01a7-4e14-812e-7c70b350662a","Type":"ContainerStarted","Data":"d3050508fd1e050544781ac7fa5eaa2373c8800fa505e47ba5ee1f0c25827394"}
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.831273 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc"
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.842998 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" podStartSLOduration=22.691427552 podStartE2EDuration="30.842976848s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:56.205233893 +0000 UTC m=+770.693465142" lastFinishedPulling="2025-11-25 07:00:04.356783179 +0000 UTC m=+778.845014438" observedRunningTime="2025-11-25 07:00:08.840463501 +0000 UTC m=+783.328694760" watchObservedRunningTime="2025-11-25 07:00:08.842976848 +0000 UTC m=+783.331208097"
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.843427 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc"
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.858982 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" event={"ID":"3a5cd60b-13ff-44ea-b256-1e05d03912e4","Type":"ContainerStarted","Data":"4367469cafbdc0304e33bf5f4d27f275a29d772e4184958a4f2b6b3ee36570ea"}
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.859009 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6"
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.906762 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" podStartSLOduration=4.207431214 podStartE2EDuration="30.906745969s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:41.368358897 +0000 UTC m=+755.856590156" lastFinishedPulling="2025-11-25 07:00:08.067673651 +0000 UTC m=+782.555904911" observedRunningTime="2025-11-25 07:00:08.900611106 +0000 UTC m=+783.388842375" watchObservedRunningTime="2025-11-25 07:00:08.906745969 +0000 UTC m=+783.394977227"
Nov 25 07:00:08 crc kubenswrapper[4482]: I1125 07:00:08.925723 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" podStartSLOduration=5.398289409 podStartE2EDuration="30.925708575s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:40.803582572 +0000 UTC m=+755.291813831" lastFinishedPulling="2025-11-25 07:00:06.331001738 +0000 UTC m=+780.819232997" observedRunningTime="2025-11-25 07:00:08.918493869 +0000 UTC m=+783.406725118" watchObservedRunningTime="2025-11-25 07:00:08.925708575 +0000 UTC m=+783.413939835"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.062080 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.087535 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" podStartSLOduration=8.126076834 podStartE2EDuration="31.08751625s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:41.411517855 +0000 UTC m=+755.899749114" lastFinishedPulling="2025-11-25 07:00:04.372957281 +0000 UTC m=+778.861188530" observedRunningTime="2025-11-25 07:00:08.94201157 +0000 UTC m=+783.430242829" watchObservedRunningTime="2025-11-25 07:00:09.08751625 +0000 UTC m=+783.575747499"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.469929 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.526848 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ef458b3-5100-4773-8b07-ed066b2b29ee-config-volume\") pod \"4ef458b3-5100-4773-8b07-ed066b2b29ee\" (UID: \"4ef458b3-5100-4773-8b07-ed066b2b29ee\") "
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.526991 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ef458b3-5100-4773-8b07-ed066b2b29ee-secret-volume\") pod \"4ef458b3-5100-4773-8b07-ed066b2b29ee\" (UID: \"4ef458b3-5100-4773-8b07-ed066b2b29ee\") "
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.527062 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9qdr\" (UniqueName: \"kubernetes.io/projected/4ef458b3-5100-4773-8b07-ed066b2b29ee-kube-api-access-f9qdr\") pod \"4ef458b3-5100-4773-8b07-ed066b2b29ee\" (UID: \"4ef458b3-5100-4773-8b07-ed066b2b29ee\") "
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.527419 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef458b3-5100-4773-8b07-ed066b2b29ee-config-volume" (OuterVolumeSpecName: "config-volume") pod "4ef458b3-5100-4773-8b07-ed066b2b29ee" (UID: "4ef458b3-5100-4773-8b07-ed066b2b29ee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.532754 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef458b3-5100-4773-8b07-ed066b2b29ee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4ef458b3-5100-4773-8b07-ed066b2b29ee" (UID: "4ef458b3-5100-4773-8b07-ed066b2b29ee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.535750 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef458b3-5100-4773-8b07-ed066b2b29ee-kube-api-access-f9qdr" (OuterVolumeSpecName: "kube-api-access-f9qdr") pod "4ef458b3-5100-4773-8b07-ed066b2b29ee" (UID: "4ef458b3-5100-4773-8b07-ed066b2b29ee"). InnerVolumeSpecName "kube-api-access-f9qdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.628647 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9qdr\" (UniqueName: \"kubernetes.io/projected/4ef458b3-5100-4773-8b07-ed066b2b29ee-kube-api-access-f9qdr\") on node \"crc\" DevicePath \"\""
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.628678 4482 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ef458b3-5100-4773-8b07-ed066b2b29ee-config-volume\") on node \"crc\" DevicePath \"\""
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.628689 4482 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ef458b3-5100-4773-8b07-ed066b2b29ee-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.865666 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcgrk" event={"ID":"add1b1a6-f427-464f-93f1-4f2f2cd92e43","Type":"ContainerStarted","Data":"8de607c684de3e38877be5ac0a69093435732cb3510df59664bc9805c6e32301"}
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.867876 4482 generic.go:334] "Generic (PLEG): container finished" podID="36202d20-113a-4c20-8f4d-1f85dc2c0853" containerID="35c53138b1156e08b4df83bc1cdde2a44c80d0e432c7113e03d8933b779d7c49" exitCode=0
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.867939 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvhb9" event={"ID":"36202d20-113a-4c20-8f4d-1f85dc2c0853","Type":"ContainerDied","Data":"35c53138b1156e08b4df83bc1cdde2a44c80d0e432c7113e03d8933b779d7c49"}
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.871355 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdff4" event={"ID":"079e38f7-2ae4-43ed-a466-09930f83d081","Type":"ContainerStarted","Data":"40727f899e9ec061ae96f2f88f8a3d037612fe8bc4e0c909473d646d62fd121c"}
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.874252 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.874274 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz" event={"ID":"4ef458b3-5100-4773-8b07-ed066b2b29ee","Type":"ContainerDied","Data":"e5c89a1d6d5a40446c0404aa26f7da3821a67844e97a984eb649d06504240e9e"}
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.874321 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5c89a1d6d5a40446c0404aa26f7da3821a67844e97a984eb649d06504240e9e"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.875789 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8" event={"ID":"7059a6d7-9dca-499a-9110-e8dafb53935b","Type":"ContainerStarted","Data":"96776e53576c5690dfe87d539bcd9673af78b98ad30ee2b3622919d65dd241d8"}
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.876128 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.878074 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" event={"ID":"4be124a3-1fa2-455c-834f-01e66fc326b3","Type":"ContainerStarted","Data":"f162cf30d632ace23a8c2ddb8a5c8df06ab3e974b8684d75d48f6ff873b63bf2"}
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.878216 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.880151 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" event={"ID":"20c9d02f-1cbc-4c66-84ff-7cbf40bac507","Type":"ContainerStarted","Data":"80bc8aef27110fe123b634235c0f1e4771fd93f4eb4fac1600d56f3fc901269e"}
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.880282 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.884319 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" event={"ID":"f3eb6724-3ab3-4027-b8e6-3d90c403f13a","Type":"ContainerStarted","Data":"e613add272e2b07f21b51e7bfb49ea451dad3418f2af376b2e293c77c216eec9"}
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.884631 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.886220 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.902060 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zcgrk" podStartSLOduration=15.816661788 podStartE2EDuration="18.902048097s" podCreationTimestamp="2025-11-25 06:59:51 +0000 UTC" firstStartedPulling="2025-11-25 07:00:06.236651773 +0000 UTC m=+780.724883033" lastFinishedPulling="2025-11-25 07:00:09.322038083 +0000 UTC m=+783.810269342" observedRunningTime="2025-11-25 07:00:09.897204699 +0000 UTC m=+784.385435969" watchObservedRunningTime="2025-11-25 07:00:09.902048097 +0000 UTC m=+784.390279357"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.923259 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" podStartSLOduration=5.01041168 podStartE2EDuration="31.923236019s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:41.363704495 +0000 UTC m=+755.851935744" lastFinishedPulling="2025-11-25 07:00:08.276528823 +0000 UTC m=+782.764760083" observedRunningTime="2025-11-25 07:00:09.915937835 +0000 UTC m=+784.404169094" watchObservedRunningTime="2025-11-25 07:00:09.923236019 +0000 UTC m=+784.411467267"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.954004 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" podStartSLOduration=3.098945727 podStartE2EDuration="31.953987643s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:39.829635599 +0000 UTC m=+754.317866858" lastFinishedPulling="2025-11-25 07:00:08.684677515 +0000 UTC m=+783.172908774" observedRunningTime="2025-11-25 07:00:09.950548101 +0000 UTC m=+784.438779360" watchObservedRunningTime="2025-11-25 07:00:09.953987643 +0000 UTC m=+784.442218892"
Nov 25 07:00:09 crc kubenswrapper[4482]: I1125 07:00:09.997909 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8" podStartSLOduration=3.861874182 podStartE2EDuration="31.997890533s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:41.296422144 +0000 UTC m=+755.784653403" lastFinishedPulling="2025-11-25 07:00:09.432438505 +0000 UTC m=+783.920669754" observedRunningTime="2025-11-25 07:00:09.96913499 +0000 UTC m=+784.457366249" watchObservedRunningTime="2025-11-25 07:00:09.997890533 +0000 UTC m=+784.486121792"
Nov 25 07:00:10 crc kubenswrapper[4482]: I1125 07:00:10.018381 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wdff4" podStartSLOduration=20.032223297 podStartE2EDuration="24.018372134s" podCreationTimestamp="2025-11-25 06:59:46 +0000 UTC" firstStartedPulling="2025-11-25 07:00:04.290363403 +0000 UTC m=+778.778594663" lastFinishedPulling="2025-11-25 07:00:08.276512241 +0000 UTC m=+782.764743500" observedRunningTime="2025-11-25 07:00:10.002267683 +0000 UTC m=+784.490498941" watchObservedRunningTime="2025-11-25 07:00:10.018372134 +0000 UTC m=+784.506603383"
Nov 25 07:00:10 crc kubenswrapper[4482]: I1125 07:00:10.527439 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" podStartSLOduration=4.479129349 podStartE2EDuration="32.527408309s" podCreationTimestamp="2025-11-25 06:59:38 +0000 UTC" firstStartedPulling="2025-11-25 06:59:40.450648909 +0000 UTC m=+754.938880168" lastFinishedPulling="2025-11-25 07:00:08.498927879 +0000 UTC m=+782.987159128" observedRunningTime="2025-11-25 07:00:10.027630582 +0000 UTC m=+784.515861871" watchObservedRunningTime="2025-11-25 07:00:10.527408309 +0000 UTC m=+785.015639568"
Nov 25 07:00:10 crc kubenswrapper[4482]: I1125 07:00:10.896744 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvhb9" event={"ID":"36202d20-113a-4c20-8f4d-1f85dc2c0853","Type":"ContainerStarted","Data":"3f24dcfc0009fbc4dc4de76154c2fb3a37158d17919c93224a890c9f6b8d4ffa"}
Nov 25 07:00:10 crc kubenswrapper[4482]: I1125 07:00:10.912099 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vvhb9" podStartSLOduration=5.732229329 podStartE2EDuration="8.912082903s" podCreationTimestamp="2025-11-25 07:00:02 +0000 UTC" firstStartedPulling="2025-11-25 07:00:07.213899919 +0000 UTC m=+781.702131178" lastFinishedPulling="2025-11-25 07:00:10.393753492 +0000 UTC m=+784.881984752" observedRunningTime="2025-11-25 07:00:10.910531278 +0000 UTC m=+785.398762537" watchObservedRunningTime="2025-11-25 07:00:10.912082903 +0000 UTC m=+785.400314162"
Nov 25 07:00:11 crc kubenswrapper[4482]: I1125 07:00:11.578616 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zcgrk"
Nov 25 07:00:11 crc kubenswrapper[4482]: I1125 07:00:11.578675 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zcgrk"
Nov 25 07:00:12 crc kubenswrapper[4482]: I1125 07:00:12.615760 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zcgrk" podUID="add1b1a6-f427-464f-93f1-4f2f2cd92e43" containerName="registry-server" probeResult="failure" output=<
Nov 25 07:00:12 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s
Nov 25 07:00:12 crc kubenswrapper[4482]: >
Nov 25 07:00:12 crc kubenswrapper[4482]: I1125 07:00:12.725618 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch"
Nov 25 07:00:12 crc kubenswrapper[4482]: I1125 07:00:12.999767 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vvhb9"
Nov 25 07:00:13 crc kubenswrapper[4482]: I1125 07:00:13.000096 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vvhb9"
Nov 25 07:00:13 crc kubenswrapper[4482]: I1125 07:00:13.038839 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vvhb9"
Nov 25 07:00:13 crc kubenswrapper[4482]: I1125 07:00:13.336277 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq"
Nov 25 07:00:16 crc kubenswrapper[4482]: I1125 07:00:16.924202 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wdff4"
Nov 25 07:00:16 crc kubenswrapper[4482]: I1125 07:00:16.924554 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wdff4"
Nov 25 07:00:16 crc kubenswrapper[4482]: I1125 07:00:16.957051 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wdff4"
Nov 25 07:00:16 crc kubenswrapper[4482]: I1125 07:00:16.997318 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wdff4"
Nov 25 07:00:18 crc kubenswrapper[4482]: I1125 07:00:18.028476 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wdff4"]
Nov 25 07:00:18 crc kubenswrapper[4482]: I1125 07:00:18.754918 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk"
Nov 25 07:00:18 crc kubenswrapper[4482]: I1125 07:00:18.953394 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wdff4" podUID="079e38f7-2ae4-43ed-a466-09930f83d081" containerName="registry-server" containerID="cri-o://40727f899e9ec061ae96f2f88f8a3d037612fe8bc4e0c909473d646d62fd121c" gracePeriod=2
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.079802 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h"
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.149179 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt"
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.354116 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wdff4"
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.368394 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079e38f7-2ae4-43ed-a466-09930f83d081-catalog-content\") pod \"079e38f7-2ae4-43ed-a466-09930f83d081\" (UID: \"079e38f7-2ae4-43ed-a466-09930f83d081\") "
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.368482 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079e38f7-2ae4-43ed-a466-09930f83d081-utilities\") pod \"079e38f7-2ae4-43ed-a466-09930f83d081\" (UID: \"079e38f7-2ae4-43ed-a466-09930f83d081\") "
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.368523 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwcw2\" (UniqueName: \"kubernetes.io/projected/079e38f7-2ae4-43ed-a466-09930f83d081-kube-api-access-wwcw2\") pod \"079e38f7-2ae4-43ed-a466-09930f83d081\" (UID: \"079e38f7-2ae4-43ed-a466-09930f83d081\") "
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.369219 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/079e38f7-2ae4-43ed-a466-09930f83d081-utilities" (OuterVolumeSpecName: "utilities") pod "079e38f7-2ae4-43ed-a466-09930f83d081" (UID: "079e38f7-2ae4-43ed-a466-09930f83d081"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.376292 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/079e38f7-2ae4-43ed-a466-09930f83d081-kube-api-access-wwcw2" (OuterVolumeSpecName: "kube-api-access-wwcw2") pod "079e38f7-2ae4-43ed-a466-09930f83d081" (UID: "079e38f7-2ae4-43ed-a466-09930f83d081"). InnerVolumeSpecName "kube-api-access-wwcw2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.411322 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/079e38f7-2ae4-43ed-a466-09930f83d081-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "079e38f7-2ae4-43ed-a466-09930f83d081" (UID: "079e38f7-2ae4-43ed-a466-09930f83d081"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.471239 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/079e38f7-2ae4-43ed-a466-09930f83d081-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.471279 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/079e38f7-2ae4-43ed-a466-09930f83d081-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.471292 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwcw2\" (UniqueName: \"kubernetes.io/projected/079e38f7-2ae4-43ed-a466-09930f83d081-kube-api-access-wwcw2\") on node \"crc\" DevicePath \"\""
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.472448 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm"
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.542804 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8"
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.543235 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf"
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.966892 4482 generic.go:334] "Generic (PLEG): container finished" podID="079e38f7-2ae4-43ed-a466-09930f83d081" containerID="40727f899e9ec061ae96f2f88f8a3d037612fe8bc4e0c909473d646d62fd121c" exitCode=0
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.966978 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wdff4"
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.966965 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdff4" event={"ID":"079e38f7-2ae4-43ed-a466-09930f83d081","Type":"ContainerDied","Data":"40727f899e9ec061ae96f2f88f8a3d037612fe8bc4e0c909473d646d62fd121c"}
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.967313 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wdff4" event={"ID":"079e38f7-2ae4-43ed-a466-09930f83d081","Type":"ContainerDied","Data":"10e012508e76a650c9df06fcd57448e6db1ab2f8f0d30f1c482d7d50926c8c00"}
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.967358 4482 scope.go:117] "RemoveContainer" containerID="40727f899e9ec061ae96f2f88f8a3d037612fe8bc4e0c909473d646d62fd121c"
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.985153 4482 scope.go:117] "RemoveContainer" containerID="4753ffd54d5f4767d5640ce72daa38dd09eae28e67592f2d36016c0388d3060d"
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.985586 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wdff4"]
Nov 25 07:00:19 crc kubenswrapper[4482]: I1125 07:00:19.993302 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wdff4"]
Nov 25 07:00:20 crc kubenswrapper[4482]: I1125 07:00:20.017379 4482 scope.go:117] "RemoveContainer" containerID="dd0dd460345456742eb8ccafe8fc48b97fc41c65527a528ffdb6d2cf6acc6faa"
Nov 25 07:00:20 crc kubenswrapper[4482]: I1125 07:00:20.043962 4482 scope.go:117] "RemoveContainer" containerID="40727f899e9ec061ae96f2f88f8a3d037612fe8bc4e0c909473d646d62fd121c"
Nov 25 07:00:20 crc kubenswrapper[4482]: E1125 07:00:20.044636 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40727f899e9ec061ae96f2f88f8a3d037612fe8bc4e0c909473d646d62fd121c\": container with ID starting with 40727f899e9ec061ae96f2f88f8a3d037612fe8bc4e0c909473d646d62fd121c not found: ID does not exist" containerID="40727f899e9ec061ae96f2f88f8a3d037612fe8bc4e0c909473d646d62fd121c"
Nov 25 07:00:20 crc kubenswrapper[4482]: I1125 07:00:20.044669 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40727f899e9ec061ae96f2f88f8a3d037612fe8bc4e0c909473d646d62fd121c"} err="failed to get container status \"40727f899e9ec061ae96f2f88f8a3d037612fe8bc4e0c909473d646d62fd121c\": rpc error: code = NotFound desc = could not find container \"40727f899e9ec061ae96f2f88f8a3d037612fe8bc4e0c909473d646d62fd121c\": container with ID starting with 40727f899e9ec061ae96f2f88f8a3d037612fe8bc4e0c909473d646d62fd121c not found: ID does not exist"
Nov 25 07:00:20 crc kubenswrapper[4482]: I1125 07:00:20.044690 4482 scope.go:117] "RemoveContainer" containerID="4753ffd54d5f4767d5640ce72daa38dd09eae28e67592f2d36016c0388d3060d"
Nov 25 07:00:20 crc kubenswrapper[4482]: E1125 07:00:20.045027 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4753ffd54d5f4767d5640ce72daa38dd09eae28e67592f2d36016c0388d3060d\": container with ID starting with 4753ffd54d5f4767d5640ce72daa38dd09eae28e67592f2d36016c0388d3060d not found: ID does not exist" containerID="4753ffd54d5f4767d5640ce72daa38dd09eae28e67592f2d36016c0388d3060d"
Nov 25 07:00:20 crc kubenswrapper[4482]: I1125 07:00:20.045066 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4753ffd54d5f4767d5640ce72daa38dd09eae28e67592f2d36016c0388d3060d"} err="failed to get container status \"4753ffd54d5f4767d5640ce72daa38dd09eae28e67592f2d36016c0388d3060d\": rpc error: code = NotFound desc = could not find container \"4753ffd54d5f4767d5640ce72daa38dd09eae28e67592f2d36016c0388d3060d\": container with ID starting with 4753ffd54d5f4767d5640ce72daa38dd09eae28e67592f2d36016c0388d3060d not found: ID does not exist"
Nov 25 07:00:20 crc kubenswrapper[4482]: I1125 07:00:20.045080 4482 scope.go:117] "RemoveContainer" containerID="dd0dd460345456742eb8ccafe8fc48b97fc41c65527a528ffdb6d2cf6acc6faa"
Nov 25 07:00:20 crc kubenswrapper[4482]: E1125 07:00:20.045426 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd0dd460345456742eb8ccafe8fc48b97fc41c65527a528ffdb6d2cf6acc6faa\": container with ID starting with dd0dd460345456742eb8ccafe8fc48b97fc41c65527a528ffdb6d2cf6acc6faa not found: ID does not exist" containerID="dd0dd460345456742eb8ccafe8fc48b97fc41c65527a528ffdb6d2cf6acc6faa"
Nov 25 07:00:20 crc kubenswrapper[4482]: I1125 07:00:20.045465 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd0dd460345456742eb8ccafe8fc48b97fc41c65527a528ffdb6d2cf6acc6faa"} err="failed to get container status \"dd0dd460345456742eb8ccafe8fc48b97fc41c65527a528ffdb6d2cf6acc6faa\": rpc error: code = NotFound desc = could not find container \"dd0dd460345456742eb8ccafe8fc48b97fc41c65527a528ffdb6d2cf6acc6faa\": container with ID starting with dd0dd460345456742eb8ccafe8fc48b97fc41c65527a528ffdb6d2cf6acc6faa not found: ID does not exist"
Nov 25 07:00:20 crc kubenswrapper[4482]: I1125 07:00:20.379721 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6"
Nov 25 07:00:21 crc kubenswrapper[4482]: I1125 07:00:21.638540 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zcgrk"
Nov 25 07:00:21 crc kubenswrapper[4482]: I1125 07:00:21.705004 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zcgrk"
Nov 25 07:00:21 crc kubenswrapper[4482]: I1125 07:00:21.840945 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="079e38f7-2ae4-43ed-a466-09930f83d081" path="/var/lib/kubelet/pods/079e38f7-2ae4-43ed-a466-09930f83d081/volumes"
Nov 25 07:00:22 crc kubenswrapper[4482]: I1125 07:00:22.428384 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcgrk"]
Nov 25 07:00:22 crc kubenswrapper[4482]: I1125 07:00:22.989254 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zcgrk" podUID="add1b1a6-f427-464f-93f1-4f2f2cd92e43" containerName="registry-server" containerID="cri-o://8de607c684de3e38877be5ac0a69093435732cb3510df59664bc9805c6e32301" gracePeriod=2
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.028739 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vvhb9"
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.326842 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcgrk"
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.333386 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add1b1a6-f427-464f-93f1-4f2f2cd92e43-catalog-content\") pod \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\" (UID: \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\") "
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.333447 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add1b1a6-f427-464f-93f1-4f2f2cd92e43-utilities\") pod \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\" (UID: \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\") "
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.333466 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kmdx\" (UniqueName: \"kubernetes.io/projected/add1b1a6-f427-464f-93f1-4f2f2cd92e43-kube-api-access-7kmdx\") pod \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\" (UID: \"add1b1a6-f427-464f-93f1-4f2f2cd92e43\") "
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.334825 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add1b1a6-f427-464f-93f1-4f2f2cd92e43-utilities" (OuterVolumeSpecName: "utilities") pod "add1b1a6-f427-464f-93f1-4f2f2cd92e43" (UID: "add1b1a6-f427-464f-93f1-4f2f2cd92e43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.338811 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add1b1a6-f427-464f-93f1-4f2f2cd92e43-kube-api-access-7kmdx" (OuterVolumeSpecName: "kube-api-access-7kmdx") pod "add1b1a6-f427-464f-93f1-4f2f2cd92e43" (UID: "add1b1a6-f427-464f-93f1-4f2f2cd92e43"). InnerVolumeSpecName "kube-api-access-7kmdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.348436 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add1b1a6-f427-464f-93f1-4f2f2cd92e43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "add1b1a6-f427-464f-93f1-4f2f2cd92e43" (UID: "add1b1a6-f427-464f-93f1-4f2f2cd92e43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.434358 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add1b1a6-f427-464f-93f1-4f2f2cd92e43-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.434386 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add1b1a6-f427-464f-93f1-4f2f2cd92e43-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.434396 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kmdx\" (UniqueName: \"kubernetes.io/projected/add1b1a6-f427-464f-93f1-4f2f2cd92e43-kube-api-access-7kmdx\") on node \"crc\" DevicePath \"\""
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.998333 4482 generic.go:334] "Generic (PLEG): container finished" podID="add1b1a6-f427-464f-93f1-4f2f2cd92e43" containerID="8de607c684de3e38877be5ac0a69093435732cb3510df59664bc9805c6e32301" exitCode=0
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.998549 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcgrk" event={"ID":"add1b1a6-f427-464f-93f1-4f2f2cd92e43","Type":"ContainerDied","Data":"8de607c684de3e38877be5ac0a69093435732cb3510df59664bc9805c6e32301"}
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.998995 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcgrk" event={"ID":"add1b1a6-f427-464f-93f1-4f2f2cd92e43","Type":"ContainerDied","Data":"b700e705f614e7d1906a37432e13254b5a3c3906af1c358a11e2efdc8201974b"}
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.999071 4482 scope.go:117] "RemoveContainer" containerID="8de607c684de3e38877be5ac0a69093435732cb3510df59664bc9805c6e32301"
Nov 25 07:00:23 crc kubenswrapper[4482]: I1125 07:00:23.998695 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcgrk"
Nov 25 07:00:24 crc kubenswrapper[4482]: I1125 07:00:24.019040 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcgrk"]
Nov 25 07:00:24 crc kubenswrapper[4482]: I1125 07:00:24.022739 4482 scope.go:117] "RemoveContainer" containerID="2d00348d694a2df0dcbfc72ac9c229769060e8828b566747ea99eaf8db7a903e"
Nov 25 07:00:24 crc kubenswrapper[4482]: I1125 07:00:24.023875 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcgrk"]
Nov 25 07:00:24 crc kubenswrapper[4482]: I1125 07:00:24.040904 4482 scope.go:117] "RemoveContainer" containerID="9468e49d405889c86dec7ec0cd6ee3d0600ed1263630d3f7b2ef7b4606ed6280"
Nov 25 07:00:24 crc kubenswrapper[4482]: I1125 07:00:24.056718 4482 scope.go:117] "RemoveContainer" containerID="8de607c684de3e38877be5ac0a69093435732cb3510df59664bc9805c6e32301"
Nov 25 07:00:24 crc kubenswrapper[4482]: E1125 07:00:24.057136 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8de607c684de3e38877be5ac0a69093435732cb3510df59664bc9805c6e32301\": container with ID starting with 8de607c684de3e38877be5ac0a69093435732cb3510df59664bc9805c6e32301 not found: ID does not exist" containerID="8de607c684de3e38877be5ac0a69093435732cb3510df59664bc9805c6e32301"
Nov 25 07:00:24 crc kubenswrapper[4482]: I1125 07:00:24.057193 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de607c684de3e38877be5ac0a69093435732cb3510df59664bc9805c6e32301"} err="failed to get container status \"8de607c684de3e38877be5ac0a69093435732cb3510df59664bc9805c6e32301\": rpc error: code = NotFound desc = could not find container \"8de607c684de3e38877be5ac0a69093435732cb3510df59664bc9805c6e32301\": container with ID starting with 8de607c684de3e38877be5ac0a69093435732cb3510df59664bc9805c6e32301 not found: ID does not exist"
Nov 25 07:00:24 crc kubenswrapper[4482]: I1125 07:00:24.057220 4482 scope.go:117] "RemoveContainer" containerID="2d00348d694a2df0dcbfc72ac9c229769060e8828b566747ea99eaf8db7a903e"
Nov 25 07:00:24 crc kubenswrapper[4482]: E1125 07:00:24.057720 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d00348d694a2df0dcbfc72ac9c229769060e8828b566747ea99eaf8db7a903e\": container with ID starting with 2d00348d694a2df0dcbfc72ac9c229769060e8828b566747ea99eaf8db7a903e not found: ID does not exist" containerID="2d00348d694a2df0dcbfc72ac9c229769060e8828b566747ea99eaf8db7a903e"
Nov 25 07:00:24 crc kubenswrapper[4482]: I1125 07:00:24.057807 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d00348d694a2df0dcbfc72ac9c229769060e8828b566747ea99eaf8db7a903e"} err="failed to get container status \"2d00348d694a2df0dcbfc72ac9c229769060e8828b566747ea99eaf8db7a903e\": rpc error: code = NotFound desc = could not find container \"2d00348d694a2df0dcbfc72ac9c229769060e8828b566747ea99eaf8db7a903e\": container with ID starting with 2d00348d694a2df0dcbfc72ac9c229769060e8828b566747ea99eaf8db7a903e not found: ID does not exist"
Nov 25 07:00:24 crc kubenswrapper[4482]: I1125 07:00:24.057860 4482 scope.go:117] "RemoveContainer" containerID="9468e49d405889c86dec7ec0cd6ee3d0600ed1263630d3f7b2ef7b4606ed6280"
Nov 25 07:00:24 crc kubenswrapper[4482]: E1125 07:00:24.058469 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9468e49d405889c86dec7ec0cd6ee3d0600ed1263630d3f7b2ef7b4606ed6280\": container with ID starting with 9468e49d405889c86dec7ec0cd6ee3d0600ed1263630d3f7b2ef7b4606ed6280 not found: ID does not exist" containerID="9468e49d405889c86dec7ec0cd6ee3d0600ed1263630d3f7b2ef7b4606ed6280"
Nov 25 07:00:24 crc kubenswrapper[4482]: I1125 07:00:24.058505 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9468e49d405889c86dec7ec0cd6ee3d0600ed1263630d3f7b2ef7b4606ed6280"} err="failed to get container status \"9468e49d405889c86dec7ec0cd6ee3d0600ed1263630d3f7b2ef7b4606ed6280\": rpc error: code = NotFound desc = could not find container \"9468e49d405889c86dec7ec0cd6ee3d0600ed1263630d3f7b2ef7b4606ed6280\": container with ID starting with 9468e49d405889c86dec7ec0cd6ee3d0600ed1263630d3f7b2ef7b4606ed6280 not found: ID does not exist"
Nov 25 07:00:25 crc kubenswrapper[4482]: I1125 07:00:25.427501 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vvhb9"]
Nov 25 07:00:25 crc kubenswrapper[4482]: I1125 07:00:25.428018 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vvhb9" podUID="36202d20-113a-4c20-8f4d-1f85dc2c0853" containerName="registry-server" containerID="cri-o://3f24dcfc0009fbc4dc4de76154c2fb3a37158d17919c93224a890c9f6b8d4ffa" gracePeriod=2
Nov 25 07:00:25 crc kubenswrapper[4482]: I1125 07:00:25.776868 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vvhb9"
Nov 25 07:00:25 crc kubenswrapper[4482]: I1125 07:00:25.838073 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="add1b1a6-f427-464f-93f1-4f2f2cd92e43" path="/var/lib/kubelet/pods/add1b1a6-f427-464f-93f1-4f2f2cd92e43/volumes"
Nov 25 07:00:25 crc kubenswrapper[4482]: I1125 07:00:25.969072 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36202d20-113a-4c20-8f4d-1f85dc2c0853-catalog-content\") pod \"36202d20-113a-4c20-8f4d-1f85dc2c0853\" (UID: \"36202d20-113a-4c20-8f4d-1f85dc2c0853\") "
Nov 25 07:00:25 crc kubenswrapper[4482]: I1125 07:00:25.969207 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwk77\" (UniqueName: \"kubernetes.io/projected/36202d20-113a-4c20-8f4d-1f85dc2c0853-kube-api-access-hwk77\") pod \"36202d20-113a-4c20-8f4d-1f85dc2c0853\" (UID: \"36202d20-113a-4c20-8f4d-1f85dc2c0853\") "
Nov 25 07:00:25 crc kubenswrapper[4482]: I1125 07:00:25.969433 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36202d20-113a-4c20-8f4d-1f85dc2c0853-utilities\") pod \"36202d20-113a-4c20-8f4d-1f85dc2c0853\" (UID: \"36202d20-113a-4c20-8f4d-1f85dc2c0853\") "
Nov 25 07:00:25 crc kubenswrapper[4482]: I1125 07:00:25.970057 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36202d20-113a-4c20-8f4d-1f85dc2c0853-utilities" (OuterVolumeSpecName: "utilities") pod "36202d20-113a-4c20-8f4d-1f85dc2c0853" (UID: "36202d20-113a-4c20-8f4d-1f85dc2c0853"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:00:25 crc kubenswrapper[4482]: I1125 07:00:25.978252 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36202d20-113a-4c20-8f4d-1f85dc2c0853-kube-api-access-hwk77" (OuterVolumeSpecName: "kube-api-access-hwk77") pod "36202d20-113a-4c20-8f4d-1f85dc2c0853" (UID: "36202d20-113a-4c20-8f4d-1f85dc2c0853"). InnerVolumeSpecName "kube-api-access-hwk77". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.013006 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36202d20-113a-4c20-8f4d-1f85dc2c0853-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36202d20-113a-4c20-8f4d-1f85dc2c0853" (UID: "36202d20-113a-4c20-8f4d-1f85dc2c0853"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.016488 4482 generic.go:334] "Generic (PLEG): container finished" podID="36202d20-113a-4c20-8f4d-1f85dc2c0853" containerID="3f24dcfc0009fbc4dc4de76154c2fb3a37158d17919c93224a890c9f6b8d4ffa" exitCode=0 Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.016550 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvhb9" event={"ID":"36202d20-113a-4c20-8f4d-1f85dc2c0853","Type":"ContainerDied","Data":"3f24dcfc0009fbc4dc4de76154c2fb3a37158d17919c93224a890c9f6b8d4ffa"} Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.016584 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvhb9" event={"ID":"36202d20-113a-4c20-8f4d-1f85dc2c0853","Type":"ContainerDied","Data":"6b77db6a1b0d84c49979a449decdcfd7476cafbbce208170e473c5185d82ccad"} Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.016587 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vvhb9" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.016602 4482 scope.go:117] "RemoveContainer" containerID="3f24dcfc0009fbc4dc4de76154c2fb3a37158d17919c93224a890c9f6b8d4ffa" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.037247 4482 scope.go:117] "RemoveContainer" containerID="35c53138b1156e08b4df83bc1cdde2a44c80d0e432c7113e03d8933b779d7c49" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.046606 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vvhb9"] Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.051010 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vvhb9"] Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.063058 4482 scope.go:117] "RemoveContainer" containerID="28d1a91e874542e8979b1c63485f3df6eb351b8d34383af5b82c7aa7b4add10a" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.071814 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwk77\" (UniqueName: \"kubernetes.io/projected/36202d20-113a-4c20-8f4d-1f85dc2c0853-kube-api-access-hwk77\") on node \"crc\" DevicePath \"\"" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.071843 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36202d20-113a-4c20-8f4d-1f85dc2c0853-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.071853 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36202d20-113a-4c20-8f4d-1f85dc2c0853-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.076912 4482 scope.go:117] "RemoveContainer" containerID="3f24dcfc0009fbc4dc4de76154c2fb3a37158d17919c93224a890c9f6b8d4ffa" Nov 25 07:00:26 crc kubenswrapper[4482]: E1125 07:00:26.077683 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f24dcfc0009fbc4dc4de76154c2fb3a37158d17919c93224a890c9f6b8d4ffa\": container with ID starting with 3f24dcfc0009fbc4dc4de76154c2fb3a37158d17919c93224a890c9f6b8d4ffa not found: ID does not exist" containerID="3f24dcfc0009fbc4dc4de76154c2fb3a37158d17919c93224a890c9f6b8d4ffa" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.077721 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f24dcfc0009fbc4dc4de76154c2fb3a37158d17919c93224a890c9f6b8d4ffa"} err="failed to get container status \"3f24dcfc0009fbc4dc4de76154c2fb3a37158d17919c93224a890c9f6b8d4ffa\": rpc error: code = NotFound desc = could not find container \"3f24dcfc0009fbc4dc4de76154c2fb3a37158d17919c93224a890c9f6b8d4ffa\": container with ID starting with 3f24dcfc0009fbc4dc4de76154c2fb3a37158d17919c93224a890c9f6b8d4ffa not found: ID does not exist" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.077746 4482 scope.go:117] "RemoveContainer" containerID="35c53138b1156e08b4df83bc1cdde2a44c80d0e432c7113e03d8933b779d7c49" Nov 25 07:00:26 crc kubenswrapper[4482]: E1125 07:00:26.078486 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35c53138b1156e08b4df83bc1cdde2a44c80d0e432c7113e03d8933b779d7c49\": container with ID starting with 35c53138b1156e08b4df83bc1cdde2a44c80d0e432c7113e03d8933b779d7c49 not found: 
ID does not exist" containerID="35c53138b1156e08b4df83bc1cdde2a44c80d0e432c7113e03d8933b779d7c49" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.078525 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35c53138b1156e08b4df83bc1cdde2a44c80d0e432c7113e03d8933b779d7c49"} err="failed to get container status \"35c53138b1156e08b4df83bc1cdde2a44c80d0e432c7113e03d8933b779d7c49\": rpc error: code = NotFound desc = could not find container \"35c53138b1156e08b4df83bc1cdde2a44c80d0e432c7113e03d8933b779d7c49\": container with ID starting with 35c53138b1156e08b4df83bc1cdde2a44c80d0e432c7113e03d8933b779d7c49 not found: ID does not exist" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.078550 4482 scope.go:117] "RemoveContainer" containerID="28d1a91e874542e8979b1c63485f3df6eb351b8d34383af5b82c7aa7b4add10a" Nov 25 07:00:26 crc kubenswrapper[4482]: E1125 07:00:26.079694 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28d1a91e874542e8979b1c63485f3df6eb351b8d34383af5b82c7aa7b4add10a\": container with ID starting with 28d1a91e874542e8979b1c63485f3df6eb351b8d34383af5b82c7aa7b4add10a not found: ID does not exist" containerID="28d1a91e874542e8979b1c63485f3df6eb351b8d34383af5b82c7aa7b4add10a" Nov 25 07:00:26 crc kubenswrapper[4482]: I1125 07:00:26.079747 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d1a91e874542e8979b1c63485f3df6eb351b8d34383af5b82c7aa7b4add10a"} err="failed to get container status \"28d1a91e874542e8979b1c63485f3df6eb351b8d34383af5b82c7aa7b4add10a\": rpc error: code = NotFound desc = could not find container \"28d1a91e874542e8979b1c63485f3df6eb351b8d34383af5b82c7aa7b4add10a\": container with ID starting with 28d1a91e874542e8979b1c63485f3df6eb351b8d34383af5b82c7aa7b4add10a not found: ID does not exist" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.848722 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36202d20-113a-4c20-8f4d-1f85dc2c0853" path="/var/lib/kubelet/pods/36202d20-113a-4c20-8f4d-1f85dc2c0853/volumes" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.863629 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qcvn9"] Nov 25 07:00:27 crc kubenswrapper[4482]: E1125 07:00:27.864124 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36202d20-113a-4c20-8f4d-1f85dc2c0853" containerName="registry-server" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.864217 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="36202d20-113a-4c20-8f4d-1f85dc2c0853" containerName="registry-server" Nov 25 07:00:27 crc kubenswrapper[4482]: E1125 07:00:27.864277 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add1b1a6-f427-464f-93f1-4f2f2cd92e43" containerName="extract-content" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.864322 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="add1b1a6-f427-464f-93f1-4f2f2cd92e43" containerName="extract-content" Nov 25 07:00:27 crc kubenswrapper[4482]: E1125 07:00:27.864399 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36202d20-113a-4c20-8f4d-1f85dc2c0853" containerName="extract-content" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.864445 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="36202d20-113a-4c20-8f4d-1f85dc2c0853" containerName="extract-content" Nov 25 07:00:27 
crc kubenswrapper[4482]: E1125 07:00:27.864509 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079e38f7-2ae4-43ed-a466-09930f83d081" containerName="extract-content" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.864554 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="079e38f7-2ae4-43ed-a466-09930f83d081" containerName="extract-content" Nov 25 07:00:27 crc kubenswrapper[4482]: E1125 07:00:27.864609 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36202d20-113a-4c20-8f4d-1f85dc2c0853" containerName="extract-utilities" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.864656 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="36202d20-113a-4c20-8f4d-1f85dc2c0853" containerName="extract-utilities" Nov 25 07:00:27 crc kubenswrapper[4482]: E1125 07:00:27.864717 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079e38f7-2ae4-43ed-a466-09930f83d081" containerName="extract-utilities" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.864761 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="079e38f7-2ae4-43ed-a466-09930f83d081" containerName="extract-utilities" Nov 25 07:00:27 crc kubenswrapper[4482]: E1125 07:00:27.864830 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079e38f7-2ae4-43ed-a466-09930f83d081" containerName="registry-server" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.864884 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="079e38f7-2ae4-43ed-a466-09930f83d081" containerName="registry-server" Nov 25 07:00:27 crc kubenswrapper[4482]: E1125 07:00:27.864945 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add1b1a6-f427-464f-93f1-4f2f2cd92e43" containerName="extract-utilities" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.864989 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="add1b1a6-f427-464f-93f1-4f2f2cd92e43" containerName="extract-utilities" Nov 25 07:00:27 crc kubenswrapper[4482]: E1125 07:00:27.865053 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add1b1a6-f427-464f-93f1-4f2f2cd92e43" containerName="registry-server" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.865099 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="add1b1a6-f427-464f-93f1-4f2f2cd92e43" containerName="registry-server" Nov 25 07:00:27 crc kubenswrapper[4482]: E1125 07:00:27.865146 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef458b3-5100-4773-8b07-ed066b2b29ee" containerName="collect-profiles" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.865217 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef458b3-5100-4773-8b07-ed066b2b29ee" containerName="collect-profiles" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.865457 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="079e38f7-2ae4-43ed-a466-09930f83d081" containerName="registry-server" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.865522 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="add1b1a6-f427-464f-93f1-4f2f2cd92e43" containerName="registry-server" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.865582 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef458b3-5100-4773-8b07-ed066b2b29ee" containerName="collect-profiles" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.865628 4482 memory_manager.go:354] "RemoveStaleState removing state" 
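The block above shows a pattern that recurs for every catalog pod torn down here: each container ID gets one "RemoveContainer" that succeeds, then a later "RemoveContainer" whose status lookup fails with NotFound and is logged as "DeleteContainer returned error". The errors appear benign: CRI-O had already pruned the container, so the retry is an idempotent no-op. A minimal standalone sketch (not kubelet code; the file name and regexes are assumptions fitted to the line shapes above) that scans a journal dump on stdin, one entry per line, and confirms which removals were of this kind:

    // pair_removals.go - standalone sketch, not kubelet code: report which
    // "RemoveContainer" attempts ended in NotFound, i.e. containers the
    // runtime had already pruned. Regexes assume the kubenswrapper line
    // shapes seen above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var (
        removeRE   = regexp.MustCompile(`"RemoveContainer" containerID="([0-9a-f]{64})"`)
        notFoundRE = regexp.MustCompile(`"DeleteContainer returned error".*"ID":"([0-9a-f]{64})"`)
    )

    func main() {
        attempts := map[string]int{}
        notFound := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            if m := removeRE.FindStringSubmatch(line); m != nil {
                attempts[m[1]]++
            }
            if m := notFoundRE.FindStringSubmatch(line); m != nil {
                notFound[m[1]]++
            }
        }
        for id, n := range attempts {
            // notfound > 0 on a repeat attempt means an earlier delete already won.
            fmt.Printf("%.12s attempts=%d notfound=%d\n", id, n, notFound[id])
        }
    }

Fed something like journalctl -u kubelet output, every ID in this section with attempts=2 shows notfound=1: the second pass found nothing left to delete.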
podUID="36202d20-113a-4c20-8f4d-1f85dc2c0853" containerName="registry-server" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.868914 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qcvn9" Nov 25 07:00:27 crc kubenswrapper[4482]: I1125 07:00:27.880821 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qcvn9"] Nov 25 07:00:28 crc kubenswrapper[4482]: I1125 07:00:28.006115 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-utilities\") pod \"redhat-operators-qcvn9\" (UID: \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\") " pod="openshift-marketplace/redhat-operators-qcvn9" Nov 25 07:00:28 crc kubenswrapper[4482]: I1125 07:00:28.006403 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6brr\" (UniqueName: \"kubernetes.io/projected/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-kube-api-access-p6brr\") pod \"redhat-operators-qcvn9\" (UID: \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\") " pod="openshift-marketplace/redhat-operators-qcvn9" Nov 25 07:00:28 crc kubenswrapper[4482]: I1125 07:00:28.006448 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-catalog-content\") pod \"redhat-operators-qcvn9\" (UID: \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\") " pod="openshift-marketplace/redhat-operators-qcvn9" Nov 25 07:00:28 crc kubenswrapper[4482]: I1125 07:00:28.108147 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6brr\" (UniqueName: \"kubernetes.io/projected/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-kube-api-access-p6brr\") pod \"redhat-operators-qcvn9\" (UID: \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\") " pod="openshift-marketplace/redhat-operators-qcvn9" Nov 25 07:00:28 crc kubenswrapper[4482]: I1125 07:00:28.108351 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-catalog-content\") pod \"redhat-operators-qcvn9\" (UID: \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\") " pod="openshift-marketplace/redhat-operators-qcvn9" Nov 25 07:00:28 crc kubenswrapper[4482]: I1125 07:00:28.108505 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-utilities\") pod \"redhat-operators-qcvn9\" (UID: \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\") " pod="openshift-marketplace/redhat-operators-qcvn9" Nov 25 07:00:28 crc kubenswrapper[4482]: I1125 07:00:28.108783 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-catalog-content\") pod \"redhat-operators-qcvn9\" (UID: \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\") " pod="openshift-marketplace/redhat-operators-qcvn9" Nov 25 07:00:28 crc kubenswrapper[4482]: I1125 07:00:28.108819 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-utilities\") pod \"redhat-operators-qcvn9\" (UID: \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\") " 
pod="openshift-marketplace/redhat-operators-qcvn9" Nov 25 07:00:28 crc kubenswrapper[4482]: I1125 07:00:28.124130 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6brr\" (UniqueName: \"kubernetes.io/projected/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-kube-api-access-p6brr\") pod \"redhat-operators-qcvn9\" (UID: \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\") " pod="openshift-marketplace/redhat-operators-qcvn9" Nov 25 07:00:28 crc kubenswrapper[4482]: I1125 07:00:28.184817 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qcvn9" Nov 25 07:00:28 crc kubenswrapper[4482]: I1125 07:00:28.579469 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qcvn9"] Nov 25 07:00:29 crc kubenswrapper[4482]: I1125 07:00:29.039331 4482 generic.go:334] "Generic (PLEG): container finished" podID="42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" containerID="772c1e1e8355d00d5537dab05858c566cadfc63aee9dc49519a541001d66994e" exitCode=0 Nov 25 07:00:29 crc kubenswrapper[4482]: I1125 07:00:29.039420 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcvn9" event={"ID":"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8","Type":"ContainerDied","Data":"772c1e1e8355d00d5537dab05858c566cadfc63aee9dc49519a541001d66994e"} Nov 25 07:00:29 crc kubenswrapper[4482]: I1125 07:00:29.039681 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcvn9" event={"ID":"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8","Type":"ContainerStarted","Data":"414595aba9458d02c0c65df36fd60e5d0fa5c9257c6c991ddc98269433beadc5"} Nov 25 07:00:30 crc kubenswrapper[4482]: I1125 07:00:30.047484 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcvn9" event={"ID":"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8","Type":"ContainerStarted","Data":"752f4ceee0d642101c009a176f211c54840e1a18fe2b8a1f64e631f672db593a"} Nov 25 07:00:31 crc kubenswrapper[4482]: I1125 07:00:31.059695 4482 generic.go:334] "Generic (PLEG): container finished" podID="42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" containerID="752f4ceee0d642101c009a176f211c54840e1a18fe2b8a1f64e631f672db593a" exitCode=0 Nov 25 07:00:31 crc kubenswrapper[4482]: I1125 07:00:31.059746 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcvn9" event={"ID":"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8","Type":"ContainerDied","Data":"752f4ceee0d642101c009a176f211c54840e1a18fe2b8a1f64e631f672db593a"} Nov 25 07:00:32 crc kubenswrapper[4482]: I1125 07:00:32.071602 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcvn9" event={"ID":"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8","Type":"ContainerStarted","Data":"0c11ccb89ed16350938d2653112798b81ee6f070fc8a558debe37bba866b5fdf"} Nov 25 07:00:32 crc kubenswrapper[4482]: I1125 07:00:32.090732 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qcvn9" podStartSLOduration=2.607725875 podStartE2EDuration="5.090713816s" podCreationTimestamp="2025-11-25 07:00:27 +0000 UTC" firstStartedPulling="2025-11-25 07:00:29.040815156 +0000 UTC m=+803.529046416" lastFinishedPulling="2025-11-25 07:00:31.523803097 +0000 UTC m=+806.012034357" observedRunningTime="2025-11-25 07:00:32.087043428 +0000 UTC m=+806.575274688" watchObservedRunningTime="2025-11-25 07:00:32.090713816 +0000 UTC m=+806.578945074" Nov 25 
Nov 25 07:00:35 crc kubenswrapper[4482]: I1125 07:00:35.816722 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59899b64cc-ffbfd"]
Nov 25 07:00:35 crc kubenswrapper[4482]: I1125 07:00:35.818533 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59899b64cc-ffbfd"
Nov 25 07:00:35 crc kubenswrapper[4482]: I1125 07:00:35.821186 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Nov 25 07:00:35 crc kubenswrapper[4482]: I1125 07:00:35.821230 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-2b78f"
Nov 25 07:00:35 crc kubenswrapper[4482]: I1125 07:00:35.821235 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Nov 25 07:00:35 crc kubenswrapper[4482]: I1125 07:00:35.821317 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 25 07:00:35 crc kubenswrapper[4482]: I1125 07:00:35.839511 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59899b64cc-ffbfd"]
Nov 25 07:00:35 crc kubenswrapper[4482]: I1125 07:00:35.918683 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bbd9697cc-25dts"]
Nov 25 07:00:35 crc kubenswrapper[4482]: I1125 07:00:35.920409 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bbd9697cc-25dts"
Nov 25 07:00:35 crc kubenswrapper[4482]: I1125 07:00:35.925852 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bbd9697cc-25dts"]
Nov 25 07:00:35 crc kubenswrapper[4482]: I1125 07:00:35.929916 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.020345 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw8pf\" (UniqueName: \"kubernetes.io/projected/66afc9c3-310f-426e-a54e-3ef9d8888a32-kube-api-access-mw8pf\") pod \"dnsmasq-dns-59899b64cc-ffbfd\" (UID: \"66afc9c3-310f-426e-a54e-3ef9d8888a32\") " pod="openstack/dnsmasq-dns-59899b64cc-ffbfd"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.020437 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2e203bd-17c2-478b-9682-9e443e72e76d-dns-svc\") pod \"dnsmasq-dns-7bbd9697cc-25dts\" (UID: \"b2e203bd-17c2-478b-9682-9e443e72e76d\") " pod="openstack/dnsmasq-dns-7bbd9697cc-25dts"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.020476 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49mnv\" (UniqueName: \"kubernetes.io/projected/b2e203bd-17c2-478b-9682-9e443e72e76d-kube-api-access-49mnv\") pod \"dnsmasq-dns-7bbd9697cc-25dts\" (UID: \"b2e203bd-17c2-478b-9682-9e443e72e76d\") " pod="openstack/dnsmasq-dns-7bbd9697cc-25dts"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.020492 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66afc9c3-310f-426e-a54e-3ef9d8888a32-config\") pod \"dnsmasq-dns-59899b64cc-ffbfd\" (UID: \"66afc9c3-310f-426e-a54e-3ef9d8888a32\") " pod="openstack/dnsmasq-dns-59899b64cc-ffbfd"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.020513 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e203bd-17c2-478b-9682-9e443e72e76d-config\") pod \"dnsmasq-dns-7bbd9697cc-25dts\" (UID: \"b2e203bd-17c2-478b-9682-9e443e72e76d\") " pod="openstack/dnsmasq-dns-7bbd9697cc-25dts"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.122642 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw8pf\" (UniqueName: \"kubernetes.io/projected/66afc9c3-310f-426e-a54e-3ef9d8888a32-kube-api-access-mw8pf\") pod \"dnsmasq-dns-59899b64cc-ffbfd\" (UID: \"66afc9c3-310f-426e-a54e-3ef9d8888a32\") " pod="openstack/dnsmasq-dns-59899b64cc-ffbfd"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.122700 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2e203bd-17c2-478b-9682-9e443e72e76d-dns-svc\") pod \"dnsmasq-dns-7bbd9697cc-25dts\" (UID: \"b2e203bd-17c2-478b-9682-9e443e72e76d\") " pod="openstack/dnsmasq-dns-7bbd9697cc-25dts"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.122732 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49mnv\" (UniqueName: \"kubernetes.io/projected/b2e203bd-17c2-478b-9682-9e443e72e76d-kube-api-access-49mnv\") pod \"dnsmasq-dns-7bbd9697cc-25dts\" (UID: \"b2e203bd-17c2-478b-9682-9e443e72e76d\") " pod="openstack/dnsmasq-dns-7bbd9697cc-25dts"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.122750 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66afc9c3-310f-426e-a54e-3ef9d8888a32-config\") pod \"dnsmasq-dns-59899b64cc-ffbfd\" (UID: \"66afc9c3-310f-426e-a54e-3ef9d8888a32\") " pod="openstack/dnsmasq-dns-59899b64cc-ffbfd"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.122768 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e203bd-17c2-478b-9682-9e443e72e76d-config\") pod \"dnsmasq-dns-7bbd9697cc-25dts\" (UID: \"b2e203bd-17c2-478b-9682-9e443e72e76d\") " pod="openstack/dnsmasq-dns-7bbd9697cc-25dts"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.123710 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e203bd-17c2-478b-9682-9e443e72e76d-config\") pod \"dnsmasq-dns-7bbd9697cc-25dts\" (UID: \"b2e203bd-17c2-478b-9682-9e443e72e76d\") " pod="openstack/dnsmasq-dns-7bbd9697cc-25dts"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.123769 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2e203bd-17c2-478b-9682-9e443e72e76d-dns-svc\") pod \"dnsmasq-dns-7bbd9697cc-25dts\" (UID: \"b2e203bd-17c2-478b-9682-9e443e72e76d\") " pod="openstack/dnsmasq-dns-7bbd9697cc-25dts"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.123938 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66afc9c3-310f-426e-a54e-3ef9d8888a32-config\") pod \"dnsmasq-dns-59899b64cc-ffbfd\" (UID: \"66afc9c3-310f-426e-a54e-3ef9d8888a32\") " pod="openstack/dnsmasq-dns-59899b64cc-ffbfd"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.148523 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49mnv\" (UniqueName: \"kubernetes.io/projected/b2e203bd-17c2-478b-9682-9e443e72e76d-kube-api-access-49mnv\") pod \"dnsmasq-dns-7bbd9697cc-25dts\" (UID: \"b2e203bd-17c2-478b-9682-9e443e72e76d\") " pod="openstack/dnsmasq-dns-7bbd9697cc-25dts"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.159567 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw8pf\" (UniqueName: \"kubernetes.io/projected/66afc9c3-310f-426e-a54e-3ef9d8888a32-kube-api-access-mw8pf\") pod \"dnsmasq-dns-59899b64cc-ffbfd\" (UID: \"66afc9c3-310f-426e-a54e-3ef9d8888a32\") " pod="openstack/dnsmasq-dns-59899b64cc-ffbfd"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.252283 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bbd9697cc-25dts"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.436337 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59899b64cc-ffbfd"
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.709996 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bbd9697cc-25dts"]
Nov 25 07:00:36 crc kubenswrapper[4482]: I1125 07:00:36.921425 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59899b64cc-ffbfd"]
Nov 25 07:00:36 crc kubenswrapper[4482]: W1125 07:00:36.925446 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66afc9c3_310f_426e_a54e_3ef9d8888a32.slice/crio-d985a89b7a09822eab1cbd5f6b3b8b159eb321766b4aee1d54be6fe1816f9cc9 WatchSource:0}: Error finding container d985a89b7a09822eab1cbd5f6b3b8b159eb321766b4aee1d54be6fe1816f9cc9: Status 404 returned error can't find the container with id d985a89b7a09822eab1cbd5f6b3b8b159eb321766b4aee1d54be6fe1816f9cc9
Nov 25 07:00:37 crc kubenswrapper[4482]: I1125 07:00:37.113917 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59899b64cc-ffbfd" event={"ID":"66afc9c3-310f-426e-a54e-3ef9d8888a32","Type":"ContainerStarted","Data":"d985a89b7a09822eab1cbd5f6b3b8b159eb321766b4aee1d54be6fe1816f9cc9"}
Nov 25 07:00:37 crc kubenswrapper[4482]: I1125 07:00:37.115640 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbd9697cc-25dts" event={"ID":"b2e203bd-17c2-478b-9682-9e443e72e76d","Type":"ContainerStarted","Data":"6bce75a19bc852ea7572be26bae7e6edf6e129a9b73de1416ff3ba32bc3fded0"}
Nov 25 07:00:38 crc kubenswrapper[4482]: I1125 07:00:38.185704 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qcvn9"
Nov 25 07:00:38 crc kubenswrapper[4482]: I1125 07:00:38.186490 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qcvn9"
Nov 25 07:00:38 crc kubenswrapper[4482]: I1125 07:00:38.250376 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qcvn9"
Nov 25 07:00:38 crc kubenswrapper[4482]: I1125 07:00:38.777142 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59899b64cc-ffbfd"]
Nov 25 07:00:38 crc kubenswrapper[4482]: I1125 07:00:38.804138 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848c894d9c-f46fl"]
Nov 25 07:00:38 crc kubenswrapper[4482]: I1125 07:00:38.805120 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848c894d9c-f46fl"
Nov 25 07:00:38 crc kubenswrapper[4482]: I1125 07:00:38.823202 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848c894d9c-f46fl"]
Nov 25 07:00:38 crc kubenswrapper[4482]: I1125 07:00:38.986221 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/685b0725-2c7f-4039-9471-9b596206232d-dns-svc\") pod \"dnsmasq-dns-848c894d9c-f46fl\" (UID: \"685b0725-2c7f-4039-9471-9b596206232d\") " pod="openstack/dnsmasq-dns-848c894d9c-f46fl"
Nov 25 07:00:38 crc kubenswrapper[4482]: I1125 07:00:38.986758 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/685b0725-2c7f-4039-9471-9b596206232d-config\") pod \"dnsmasq-dns-848c894d9c-f46fl\" (UID: \"685b0725-2c7f-4039-9471-9b596206232d\") " pod="openstack/dnsmasq-dns-848c894d9c-f46fl"
Nov 25 07:00:38 crc kubenswrapper[4482]: I1125 07:00:38.986845 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmh4g\" (UniqueName: \"kubernetes.io/projected/685b0725-2c7f-4039-9471-9b596206232d-kube-api-access-zmh4g\") pod \"dnsmasq-dns-848c894d9c-f46fl\" (UID: \"685b0725-2c7f-4039-9471-9b596206232d\") " pod="openstack/dnsmasq-dns-848c894d9c-f46fl"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.088860 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/685b0725-2c7f-4039-9471-9b596206232d-config\") pod \"dnsmasq-dns-848c894d9c-f46fl\" (UID: \"685b0725-2c7f-4039-9471-9b596206232d\") " pod="openstack/dnsmasq-dns-848c894d9c-f46fl"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.088955 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmh4g\" (UniqueName: \"kubernetes.io/projected/685b0725-2c7f-4039-9471-9b596206232d-kube-api-access-zmh4g\") pod \"dnsmasq-dns-848c894d9c-f46fl\" (UID: \"685b0725-2c7f-4039-9471-9b596206232d\") " pod="openstack/dnsmasq-dns-848c894d9c-f46fl"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.089046 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/685b0725-2c7f-4039-9471-9b596206232d-dns-svc\") pod \"dnsmasq-dns-848c894d9c-f46fl\" (UID: \"685b0725-2c7f-4039-9471-9b596206232d\") " pod="openstack/dnsmasq-dns-848c894d9c-f46fl"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.089858 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/685b0725-2c7f-4039-9471-9b596206232d-dns-svc\") pod \"dnsmasq-dns-848c894d9c-f46fl\" (UID: \"685b0725-2c7f-4039-9471-9b596206232d\") " pod="openstack/dnsmasq-dns-848c894d9c-f46fl"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.090601 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/685b0725-2c7f-4039-9471-9b596206232d-config\") pod \"dnsmasq-dns-848c894d9c-f46fl\" (UID: \"685b0725-2c7f-4039-9471-9b596206232d\") " pod="openstack/dnsmasq-dns-848c894d9c-f46fl"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.126933 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmh4g\" (UniqueName: \"kubernetes.io/projected/685b0725-2c7f-4039-9471-9b596206232d-kube-api-access-zmh4g\") pod \"dnsmasq-dns-848c894d9c-f46fl\" (UID: \"685b0725-2c7f-4039-9471-9b596206232d\") " pod="openstack/dnsmasq-dns-848c894d9c-f46fl"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.128808 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bbd9697cc-25dts"]
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.137135 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848c894d9c-f46fl"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.152942 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-657d948df5-trc69"]
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.153921 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-657d948df5-trc69"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.173357 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-657d948df5-trc69"]
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.240068 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qcvn9"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.317730 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8096a59a-651e-416c-99a1-95e4f8ed8f22-dns-svc\") pod \"dnsmasq-dns-657d948df5-trc69\" (UID: \"8096a59a-651e-416c-99a1-95e4f8ed8f22\") " pod="openstack/dnsmasq-dns-657d948df5-trc69"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.317936 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9rc5\" (UniqueName: \"kubernetes.io/projected/8096a59a-651e-416c-99a1-95e4f8ed8f22-kube-api-access-t9rc5\") pod \"dnsmasq-dns-657d948df5-trc69\" (UID: \"8096a59a-651e-416c-99a1-95e4f8ed8f22\") " pod="openstack/dnsmasq-dns-657d948df5-trc69"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.318671 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8096a59a-651e-416c-99a1-95e4f8ed8f22-config\") pod \"dnsmasq-dns-657d948df5-trc69\" (UID: \"8096a59a-651e-416c-99a1-95e4f8ed8f22\") " pod="openstack/dnsmasq-dns-657d948df5-trc69"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.345536 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qcvn9"]
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.422148 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8096a59a-651e-416c-99a1-95e4f8ed8f22-config\") pod \"dnsmasq-dns-657d948df5-trc69\" (UID: \"8096a59a-651e-416c-99a1-95e4f8ed8f22\") " pod="openstack/dnsmasq-dns-657d948df5-trc69"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.422223 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8096a59a-651e-416c-99a1-95e4f8ed8f22-dns-svc\") pod \"dnsmasq-dns-657d948df5-trc69\" (UID: \"8096a59a-651e-416c-99a1-95e4f8ed8f22\") " pod="openstack/dnsmasq-dns-657d948df5-trc69"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.422279 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9rc5\" (UniqueName: \"kubernetes.io/projected/8096a59a-651e-416c-99a1-95e4f8ed8f22-kube-api-access-t9rc5\") pod \"dnsmasq-dns-657d948df5-trc69\" (UID: \"8096a59a-651e-416c-99a1-95e4f8ed8f22\") " pod="openstack/dnsmasq-dns-657d948df5-trc69"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.422949 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8096a59a-651e-416c-99a1-95e4f8ed8f22-config\") pod \"dnsmasq-dns-657d948df5-trc69\" (UID: \"8096a59a-651e-416c-99a1-95e4f8ed8f22\") " pod="openstack/dnsmasq-dns-657d948df5-trc69"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.423096 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8096a59a-651e-416c-99a1-95e4f8ed8f22-dns-svc\") pod \"dnsmasq-dns-657d948df5-trc69\" (UID: \"8096a59a-651e-416c-99a1-95e4f8ed8f22\") " pod="openstack/dnsmasq-dns-657d948df5-trc69"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.449126 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9rc5\" (UniqueName: \"kubernetes.io/projected/8096a59a-651e-416c-99a1-95e4f8ed8f22-kube-api-access-t9rc5\") pod \"dnsmasq-dns-657d948df5-trc69\" (UID: \"8096a59a-651e-416c-99a1-95e4f8ed8f22\") " pod="openstack/dnsmasq-dns-657d948df5-trc69"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.488159 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-657d948df5-trc69"
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.808406 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848c894d9c-f46fl"]
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.996437 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 07:00:39 crc kubenswrapper[4482]: I1125 07:00:39.997478 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:39.998988 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:39.999621 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:39.999769 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.000042 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.000268 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.000431 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-v98p2" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.000573 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.072576 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.121491 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-657d948df5-trc69"] Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.134841 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80610219-52d0-4832-9586-5f565148e662-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.134886 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80610219-52d0-4832-9586-5f565148e662-pod-info\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.134970 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.134988 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.135028 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s96gq\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-kube-api-access-s96gq\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 
07:00:40.135047 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.135082 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80610219-52d0-4832-9586-5f565148e662-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.135118 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.135148 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80610219-52d0-4832-9586-5f565148e662-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.135185 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-config-data\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.135207 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-server-conf\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.175296 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848c894d9c-f46fl" event={"ID":"685b0725-2c7f-4039-9471-9b596206232d","Type":"ContainerStarted","Data":"9a37278929948ab3f546a2a3c6fb1aac1eec95ff67edd5a77dfbb30c49713bcf"} Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.237118 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80610219-52d0-4832-9586-5f565148e662-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.237181 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-config-data\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.237207 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.237244 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80610219-52d0-4832-9586-5f565148e662-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.237259 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80610219-52d0-4832-9586-5f565148e662-pod-info\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.237291 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.237307 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.237337 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s96gq\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-kube-api-access-s96gq\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.237356 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.237396 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80610219-52d0-4832-9586-5f565148e662-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.237426 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.238277 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-config-data\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.238413 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-server-conf\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.238552 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.238933 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.240617 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80610219-52d0-4832-9586-5f565148e662-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.240828 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80610219-52d0-4832-9586-5f565148e662-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.243891 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.245213 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80610219-52d0-4832-9586-5f565148e662-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.249619 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80610219-52d0-4832-9586-5f565148e662-pod-info\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.250397 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.268824 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s96gq\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-kube-api-access-s96gq\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.284731 4482 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.317882 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.318892 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.323822 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-z2r8l" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.323824 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.323823 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.325569 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.325625 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.326017 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.327589 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.345597 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.351954 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.450771 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.450818 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.450894 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.450960 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0f200db-f6f1-403b-bad6-85a803b5237c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.450977 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7smv\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-kube-api-access-m7smv\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.451011 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0f200db-f6f1-403b-bad6-85a803b5237c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.451073 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.451097 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.451264 4482 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.451591 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.451744 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.553113 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.553153 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.553199 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.553260 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.553329 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.553349 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.553373 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.553443 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.553509 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0f200db-f6f1-403b-bad6-85a803b5237c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.553528 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7smv\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-kube-api-access-m7smv\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.553579 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0f200db-f6f1-403b-bad6-85a803b5237c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.554905 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.556381 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.556926 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.557354 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.557690 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.558322 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.561593 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.561937 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0f200db-f6f1-403b-bad6-85a803b5237c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.565895 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.575587 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0f200db-f6f1-403b-bad6-85a803b5237c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.582511 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7smv\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-kube-api-access-m7smv\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.589670 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.639495 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:00:40 crc kubenswrapper[4482]: I1125 07:00:40.805905 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.135662 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:00:41 crc kubenswrapper[4482]: W1125 07:00:41.152012 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0f200db_f6f1_403b_bad6_85a803b5237c.slice/crio-a5a69402ad8513413eb76851255f730ef202704c9dea30790bf94e220e98052c WatchSource:0}: Error finding container a5a69402ad8513413eb76851255f730ef202704c9dea30790bf94e220e98052c: Status 404 returned error can't find the container with id a5a69402ad8513413eb76851255f730ef202704c9dea30790bf94e220e98052c Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.207802 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80610219-52d0-4832-9586-5f565148e662","Type":"ContainerStarted","Data":"a0cfdde975fd2197382ddfd7497534314ae85307bdff34c70db5cebfee330941"} Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.209237 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657d948df5-trc69" event={"ID":"8096a59a-651e-416c-99a1-95e4f8ed8f22","Type":"ContainerStarted","Data":"189bbe119da2125c747dd8ac1e19591b195f539c89ea2492cd69927292bce232"} Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.211086 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0f200db-f6f1-403b-bad6-85a803b5237c","Type":"ContainerStarted","Data":"a5a69402ad8513413eb76851255f730ef202704c9dea30790bf94e220e98052c"} Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.211306 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qcvn9" podUID="42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" containerName="registry-server" containerID="cri-o://0c11ccb89ed16350938d2653112798b81ee6f070fc8a558debe37bba866b5fdf" gracePeriod=2 Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.623442 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qcvn9" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.712207 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 25 07:00:41 crc kubenswrapper[4482]: E1125 07:00:41.712636 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" containerName="registry-server" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.712652 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" containerName="registry-server" Nov 25 07:00:41 crc kubenswrapper[4482]: E1125 07:00:41.712678 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" containerName="extract-utilities" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.712684 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" containerName="extract-utilities" Nov 25 07:00:41 crc kubenswrapper[4482]: E1125 07:00:41.712706 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" containerName="extract-content" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.712712 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" containerName="extract-content" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.712875 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" containerName="registry-server" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.713751 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.716723 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.717303 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.717909 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.720912 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-t5vb8" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.722383 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.739761 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.789963 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-catalog-content\") pod \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\" (UID: \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\") " Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.790843 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6brr\" (UniqueName: \"kubernetes.io/projected/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-kube-api-access-p6brr\") pod \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\" (UID: \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\") " Nov 25 07:00:41 crc 
kubenswrapper[4482]: I1125 07:00:41.791083 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-utilities\") pod \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\" (UID: \"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8\") " Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.792260 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-utilities" (OuterVolumeSpecName: "utilities") pod "42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" (UID: "42e7dfee-d526-45c7-9e86-1a5c2be6f9a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.799511 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-kube-api-access-p6brr" (OuterVolumeSpecName: "kube-api-access-p6brr") pod "42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" (UID: "42e7dfee-d526-45c7-9e86-1a5c2be6f9a8"). InnerVolumeSpecName "kube-api-access-p6brr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.897884 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.897969 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.898025 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.898055 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.898161 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgvw2\" (UniqueName: \"kubernetes.io/projected/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-kube-api-access-qgvw2\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.898243 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-config-data-default\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " 
pod="openstack/openstack-galera-0" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.898282 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.898353 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-kolla-config\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.898610 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6brr\" (UniqueName: \"kubernetes.io/projected/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-kube-api-access-p6brr\") on node \"crc\" DevicePath \"\"" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.898635 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:00:41 crc kubenswrapper[4482]: I1125 07:00:41.952658 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" (UID: "42e7dfee-d526-45c7-9e86-1a5c2be6f9a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.001081 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgvw2\" (UniqueName: \"kubernetes.io/projected/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-kube-api-access-qgvw2\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.001145 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-config-data-default\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.001495 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.001560 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-kolla-config\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.001738 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.001770 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.001809 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.001825 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.001888 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.004090 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-config-data-default\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.004143 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.004525 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.005726 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-kolla-config\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.010214 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.028138 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.034688 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.036643 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgvw2\" (UniqueName: \"kubernetes.io/projected/1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9-kube-api-access-qgvw2\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.038694 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9\") " pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.227474 4482 generic.go:334] "Generic (PLEG): container finished" podID="42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" containerID="0c11ccb89ed16350938d2653112798b81ee6f070fc8a558debe37bba866b5fdf" exitCode=0 Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.227525 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcvn9" event={"ID":"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8","Type":"ContainerDied","Data":"0c11ccb89ed16350938d2653112798b81ee6f070fc8a558debe37bba866b5fdf"} Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.227565 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcvn9" event={"ID":"42e7dfee-d526-45c7-9e86-1a5c2be6f9a8","Type":"ContainerDied","Data":"414595aba9458d02c0c65df36fd60e5d0fa5c9257c6c991ddc98269433beadc5"} Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.227571 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qcvn9" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.227584 4482 scope.go:117] "RemoveContainer" containerID="0c11ccb89ed16350938d2653112798b81ee6f070fc8a558debe37bba866b5fdf" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.259550 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qcvn9"] Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.264686 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qcvn9"] Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.292257 4482 scope.go:117] "RemoveContainer" containerID="752f4ceee0d642101c009a176f211c54840e1a18fe2b8a1f64e631f672db593a" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.336329 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.341074 4482 scope.go:117] "RemoveContainer" containerID="772c1e1e8355d00d5537dab05858c566cadfc63aee9dc49519a541001d66994e" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.432655 4482 scope.go:117] "RemoveContainer" containerID="0c11ccb89ed16350938d2653112798b81ee6f070fc8a558debe37bba866b5fdf" Nov 25 07:00:42 crc kubenswrapper[4482]: E1125 07:00:42.433647 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c11ccb89ed16350938d2653112798b81ee6f070fc8a558debe37bba866b5fdf\": container with ID starting with 0c11ccb89ed16350938d2653112798b81ee6f070fc8a558debe37bba866b5fdf not found: ID does not exist" containerID="0c11ccb89ed16350938d2653112798b81ee6f070fc8a558debe37bba866b5fdf" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.433731 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c11ccb89ed16350938d2653112798b81ee6f070fc8a558debe37bba866b5fdf"} err="failed to get container status \"0c11ccb89ed16350938d2653112798b81ee6f070fc8a558debe37bba866b5fdf\": rpc error: code = NotFound desc = could not find container \"0c11ccb89ed16350938d2653112798b81ee6f070fc8a558debe37bba866b5fdf\": container with ID starting with 0c11ccb89ed16350938d2653112798b81ee6f070fc8a558debe37bba866b5fdf not found: ID does not exist" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.433767 4482 scope.go:117] "RemoveContainer" containerID="752f4ceee0d642101c009a176f211c54840e1a18fe2b8a1f64e631f672db593a" Nov 25 07:00:42 crc kubenswrapper[4482]: E1125 07:00:42.434434 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"752f4ceee0d642101c009a176f211c54840e1a18fe2b8a1f64e631f672db593a\": container with ID starting with 752f4ceee0d642101c009a176f211c54840e1a18fe2b8a1f64e631f672db593a not found: ID does not exist" containerID="752f4ceee0d642101c009a176f211c54840e1a18fe2b8a1f64e631f672db593a" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.434478 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"752f4ceee0d642101c009a176f211c54840e1a18fe2b8a1f64e631f672db593a"} err="failed to get container status \"752f4ceee0d642101c009a176f211c54840e1a18fe2b8a1f64e631f672db593a\": rpc error: code = NotFound desc = could not find container \"752f4ceee0d642101c009a176f211c54840e1a18fe2b8a1f64e631f672db593a\": container with ID starting with 752f4ceee0d642101c009a176f211c54840e1a18fe2b8a1f64e631f672db593a not found: ID does not exist" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.434501 4482 scope.go:117] "RemoveContainer" containerID="772c1e1e8355d00d5537dab05858c566cadfc63aee9dc49519a541001d66994e" Nov 25 07:00:42 crc kubenswrapper[4482]: E1125 07:00:42.435420 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"772c1e1e8355d00d5537dab05858c566cadfc63aee9dc49519a541001d66994e\": container with ID starting with 772c1e1e8355d00d5537dab05858c566cadfc63aee9dc49519a541001d66994e not found: ID does not exist" containerID="772c1e1e8355d00d5537dab05858c566cadfc63aee9dc49519a541001d66994e" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.435482 4482 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"772c1e1e8355d00d5537dab05858c566cadfc63aee9dc49519a541001d66994e"} err="failed to get container status \"772c1e1e8355d00d5537dab05858c566cadfc63aee9dc49519a541001d66994e\": rpc error: code = NotFound desc = could not find container \"772c1e1e8355d00d5537dab05858c566cadfc63aee9dc49519a541001d66994e\": container with ID starting with 772c1e1e8355d00d5537dab05858c566cadfc63aee9dc49519a541001d66994e not found: ID does not exist" Nov 25 07:00:42 crc kubenswrapper[4482]: I1125 07:00:42.968250 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.102860 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.103886 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.106312 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-kkk2l" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.107588 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.118836 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.119316 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.121055 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.228981 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.229039 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.229097 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.229236 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m99gp\" (UniqueName: \"kubernetes.io/projected/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-kube-api-access-m99gp\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.229289 4482 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.229489 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.229529 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.229600 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.259585 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9","Type":"ContainerStarted","Data":"1ad1878e773b22e34ed52b73934269f7321f7ad4c4c9cf53293799cfa0c12102"} Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.331784 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.331836 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.333687 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.334047 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.334101 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.334134 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.334155 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.334185 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.334229 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m99gp\" (UniqueName: \"kubernetes.io/projected/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-kube-api-access-m99gp\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.334661 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.334696 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.335261 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.335771 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.339292 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.342052 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.347874 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m99gp\" (UniqueName: \"kubernetes.io/projected/685ea58c-3786-479c-bc85-9bd2ebd3d9a7-kube-api-access-m99gp\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.387673 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"685ea58c-3786-479c-bc85-9bd2ebd3d9a7\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.458351 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.636638 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.637707 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.646198 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.646355 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2s94l" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.646575 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.656349 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.742301 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9bced979-1034-4b28-8059-15a06044eed8-kolla-config\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.742622 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bced979-1034-4b28-8059-15a06044eed8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.742663 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bced979-1034-4b28-8059-15a06044eed8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: 
I1125 07:00:43.742704 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bced979-1034-4b28-8059-15a06044eed8-config-data\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.742763 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff5rl\" (UniqueName: \"kubernetes.io/projected/9bced979-1034-4b28-8059-15a06044eed8-kube-api-access-ff5rl\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.844674 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bced979-1034-4b28-8059-15a06044eed8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.844730 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bced979-1034-4b28-8059-15a06044eed8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.844768 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bced979-1034-4b28-8059-15a06044eed8-config-data\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.844815 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff5rl\" (UniqueName: \"kubernetes.io/projected/9bced979-1034-4b28-8059-15a06044eed8-kube-api-access-ff5rl\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.844901 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9bced979-1034-4b28-8059-15a06044eed8-kolla-config\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.845689 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9bced979-1034-4b28-8059-15a06044eed8-kolla-config\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.846006 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bced979-1034-4b28-8059-15a06044eed8-config-data\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.850632 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bced979-1034-4b28-8059-15a06044eed8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 
07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.850737 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bced979-1034-4b28-8059-15a06044eed8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.871531 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff5rl\" (UniqueName: \"kubernetes.io/projected/9bced979-1034-4b28-8059-15a06044eed8-kube-api-access-ff5rl\") pod \"memcached-0\" (UID: \"9bced979-1034-4b28-8059-15a06044eed8\") " pod="openstack/memcached-0" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.881161 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e7dfee-d526-45c7-9e86-1a5c2be6f9a8" path="/var/lib/kubelet/pods/42e7dfee-d526-45c7-9e86-1a5c2be6f9a8/volumes" Nov 25 07:00:43 crc kubenswrapper[4482]: I1125 07:00:43.962488 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 07:00:44 crc kubenswrapper[4482]: I1125 07:00:44.123606 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 07:00:45 crc kubenswrapper[4482]: I1125 07:00:45.244011 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 07:00:45 crc kubenswrapper[4482]: I1125 07:00:45.302245 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9bced979-1034-4b28-8059-15a06044eed8","Type":"ContainerStarted","Data":"1d89bae58b937b624bbf50c59faa5d417d8009b96670c382fc71e4065d37195a"} Nov 25 07:00:45 crc kubenswrapper[4482]: I1125 07:00:45.309749 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"685ea58c-3786-479c-bc85-9bd2ebd3d9a7","Type":"ContainerStarted","Data":"4a6cc18d344c88a6b597c35a3ee15f47d7eb5d88e7e5ca625f86ac4c06d7d7d2"} Nov 25 07:00:45 crc kubenswrapper[4482]: I1125 07:00:45.735310 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:00:45 crc kubenswrapper[4482]: I1125 07:00:45.736458 4482 util.go:30] "No sandbox for pod can be found. 
Nov 25 07:00:45 crc kubenswrapper[4482]: I1125 07:00:45.745731 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-zp9dx"
Nov 25 07:00:45 crc kubenswrapper[4482]: I1125 07:00:45.772425 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 25 07:00:45 crc kubenswrapper[4482]: I1125 07:00:45.882006 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t44qq\" (UniqueName: \"kubernetes.io/projected/6656629b-3105-4bc0-a292-aa2fa6df9723-kube-api-access-t44qq\") pod \"kube-state-metrics-0\" (UID: \"6656629b-3105-4bc0-a292-aa2fa6df9723\") " pod="openstack/kube-state-metrics-0"
Nov 25 07:00:45 crc kubenswrapper[4482]: I1125 07:00:45.984464 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t44qq\" (UniqueName: \"kubernetes.io/projected/6656629b-3105-4bc0-a292-aa2fa6df9723-kube-api-access-t44qq\") pod \"kube-state-metrics-0\" (UID: \"6656629b-3105-4bc0-a292-aa2fa6df9723\") " pod="openstack/kube-state-metrics-0"
Nov 25 07:00:46 crc kubenswrapper[4482]: I1125 07:00:46.039367 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t44qq\" (UniqueName: \"kubernetes.io/projected/6656629b-3105-4bc0-a292-aa2fa6df9723-kube-api-access-t44qq\") pod \"kube-state-metrics-0\" (UID: \"6656629b-3105-4bc0-a292-aa2fa6df9723\") " pod="openstack/kube-state-metrics-0"
Nov 25 07:00:46 crc kubenswrapper[4482]: I1125 07:00:46.062619 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 25 07:00:46 crc kubenswrapper[4482]: I1125 07:00:46.598629 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 25 07:00:46 crc kubenswrapper[4482]: W1125 07:00:46.613906 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6656629b_3105_4bc0_a292_aa2fa6df9723.slice/crio-dcd4a5874d2490f3f1e953f623bc7bab9fa65e85f9b15c922d9161dd4ddc03e1 WatchSource:0}: Error finding container dcd4a5874d2490f3f1e953f623bc7bab9fa65e85f9b15c922d9161dd4ddc03e1: Status 404 returned error can't find the container with id dcd4a5874d2490f3f1e953f623bc7bab9fa65e85f9b15c922d9161dd4ddc03e1
Nov 25 07:00:47 crc kubenswrapper[4482]: I1125 07:00:47.369897 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6656629b-3105-4bc0-a292-aa2fa6df9723","Type":"ContainerStarted","Data":"dcd4a5874d2490f3f1e953f623bc7bab9fa65e85f9b15c922d9161dd4ddc03e1"}
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.718531 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.720879 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.724681 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.732257 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.736423 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-4gkl5"
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.736712 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.736870 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.747838 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.797886 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c4pcb"]
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.814137 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-pgdql"]
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.815098 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.815770 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.817716 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.818658 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.818745 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-95dlg"
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.879309 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c4pcb"]
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.879534 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pgdql"]
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.890581 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw76m\" (UniqueName: \"kubernetes.io/projected/c2db5853-8834-4085-9d9a-1aeacaf47d4e-kube-api-access-hw76m\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0"
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.892954 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2db5853-8834-4085-9d9a-1aeacaf47d4e-config\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0"
Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.893091 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2db5853-8834-4085-9d9a-1aeacaf47d4e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0"
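The W1125 07:00:46.613906 warning above is cAdvisor racing CRI-O on a just-created container; the event name spells out the systemd cgroup layout for a best-effort pod. A small sketch (helper name hypothetical) that reconstructs that exact path from the pod UID and container ID in the warning:

package main

import (
	"fmt"
	"strings"
)

// besteffortCgroupPath is a hypothetical helper that rebuilds the systemd
// cgroup path seen in the watch-event warning: dashes in the pod UID become
// underscores inside the pod slice, and the CRI-O container ID is prefixed
// with "crio-".
func besteffortCgroupPath(podUID, containerID string) string {
	slice := "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
	return "/kubepods.slice/kubepods-besteffort.slice/" + slice + "/crio-" + containerID
}

func main() {
	// Values taken from the 07:00:46.613906 warning above; the output matches
	// the Name field of that event verbatim.
	fmt.Println(besteffortCgroupPath(
		"6656629b-3105-4bc0-a292-aa2fa6df9723",
		"dcd4a5874d2490f3f1e953f623bc7bab9fa65e85f9b15c922d9161dd4ddc03e1"))
}

The 404 is transient: the PLEG "ContainerStarted" event for the same container ID follows at 07:00:47.369897.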
\"kubernetes.io/secret/c2db5853-8834-4085-9d9a-1aeacaf47d4e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.893210 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.896419 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2db5853-8834-4085-9d9a-1aeacaf47d4e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.896863 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2db5853-8834-4085-9d9a-1aeacaf47d4e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.896919 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2db5853-8834-4085-9d9a-1aeacaf47d4e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.896985 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2db5853-8834-4085-9d9a-1aeacaf47d4e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.998822 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c929347e-dfc5-409e-8d78-6e888f86a294-var-run\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql" Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.998923 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c929347e-dfc5-409e-8d78-6e888f86a294-etc-ovs\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql" Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.998972 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9d3e0a-aeb5-4221-a617-71a724c676ed-combined-ca-bundle\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb" Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.999018 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2db5853-8834-4085-9d9a-1aeacaf47d4e-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.999043 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2db5853-8834-4085-9d9a-1aeacaf47d4e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.999067 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c929347e-dfc5-409e-8d78-6e888f86a294-var-lib\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql" Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.999104 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:49 crc kubenswrapper[4482]: I1125 07:00:49.999133 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c929347e-dfc5-409e-8d78-6e888f86a294-var-log\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:49.999677 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2db5853-8834-4085-9d9a-1aeacaf47d4e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:49.999721 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c929347e-dfc5-409e-8d78-6e888f86a294-scripts\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:49.999762 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4vd8\" (UniqueName: \"kubernetes.io/projected/c929347e-dfc5-409e-8d78-6e888f86a294-kube-api-access-b4vd8\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:49.999806 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cb9d3e0a-aeb5-4221-a617-71a724c676ed-var-run\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:49.999845 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb9d3e0a-aeb5-4221-a617-71a724c676ed-scripts\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 
07:00:49.999912 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2db5853-8834-4085-9d9a-1aeacaf47d4e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:49.999939 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2db5853-8834-4085-9d9a-1aeacaf47d4e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:49.999961 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb9d3e0a-aeb5-4221-a617-71a724c676ed-ovn-controller-tls-certs\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.000001 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chf9j\" (UniqueName: \"kubernetes.io/projected/cb9d3e0a-aeb5-4221-a617-71a724c676ed-kube-api-access-chf9j\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.000034 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2db5853-8834-4085-9d9a-1aeacaf47d4e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.000058 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb9d3e0a-aeb5-4221-a617-71a724c676ed-var-run-ovn\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.000077 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cb9d3e0a-aeb5-4221-a617-71a724c676ed-var-log-ovn\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.000135 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw76m\" (UniqueName: \"kubernetes.io/projected/c2db5853-8834-4085-9d9a-1aeacaf47d4e-kube-api-access-hw76m\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.000473 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.002388 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c2db5853-8834-4085-9d9a-1aeacaf47d4e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.004891 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2db5853-8834-4085-9d9a-1aeacaf47d4e-config\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.011962 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2db5853-8834-4085-9d9a-1aeacaf47d4e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.017908 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2db5853-8834-4085-9d9a-1aeacaf47d4e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.028863 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2db5853-8834-4085-9d9a-1aeacaf47d4e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.041614 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw76m\" (UniqueName: \"kubernetes.io/projected/c2db5853-8834-4085-9d9a-1aeacaf47d4e-kube-api-access-hw76m\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.050214 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2db5853-8834-4085-9d9a-1aeacaf47d4e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.050434 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c2db5853-8834-4085-9d9a-1aeacaf47d4e\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.083699 4482 util.go:30] "No sandbox for pod can be found. 
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.101468 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb9d3e0a-aeb5-4221-a617-71a724c676ed-ovn-controller-tls-certs\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.101901 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chf9j\" (UniqueName: \"kubernetes.io/projected/cb9d3e0a-aeb5-4221-a617-71a724c676ed-kube-api-access-chf9j\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.101936 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb9d3e0a-aeb5-4221-a617-71a724c676ed-var-run-ovn\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.102396 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cb9d3e0a-aeb5-4221-a617-71a724c676ed-var-log-ovn\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.102446 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb9d3e0a-aeb5-4221-a617-71a724c676ed-var-run-ovn\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.102493 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c929347e-dfc5-409e-8d78-6e888f86a294-var-run\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.102519 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c929347e-dfc5-409e-8d78-6e888f86a294-etc-ovs\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.102559 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9d3e0a-aeb5-4221-a617-71a724c676ed-combined-ca-bundle\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.102585 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c929347e-dfc5-409e-8d78-6e888f86a294-var-lib\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.102645 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c929347e-dfc5-409e-8d78-6e888f86a294-var-log\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.102669 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c929347e-dfc5-409e-8d78-6e888f86a294-scripts\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.102689 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4vd8\" (UniqueName: \"kubernetes.io/projected/c929347e-dfc5-409e-8d78-6e888f86a294-kube-api-access-b4vd8\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.102721 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cb9d3e0a-aeb5-4221-a617-71a724c676ed-var-run\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.102743 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb9d3e0a-aeb5-4221-a617-71a724c676ed-scripts\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.102969 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c929347e-dfc5-409e-8d78-6e888f86a294-var-lib\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.107118 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cb9d3e0a-aeb5-4221-a617-71a724c676ed-var-run\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.108810 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cb9d3e0a-aeb5-4221-a617-71a724c676ed-var-log-ovn\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.108870 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c929347e-dfc5-409e-8d78-6e888f86a294-var-run\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.109186 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c929347e-dfc5-409e-8d78-6e888f86a294-etc-ovs\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.109962 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c929347e-dfc5-409e-8d78-6e888f86a294-scripts\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.110259 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c929347e-dfc5-409e-8d78-6e888f86a294-var-log\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.114525 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb9d3e0a-aeb5-4221-a617-71a724c676ed-scripts\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.120487 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9d3e0a-aeb5-4221-a617-71a724c676ed-combined-ca-bundle\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.120608 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb9d3e0a-aeb5-4221-a617-71a724c676ed-ovn-controller-tls-certs\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.127246 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4vd8\" (UniqueName: \"kubernetes.io/projected/c929347e-dfc5-409e-8d78-6e888f86a294-kube-api-access-b4vd8\") pod \"ovn-controller-ovs-pgdql\" (UID: \"c929347e-dfc5-409e-8d78-6e888f86a294\") " pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.128863 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chf9j\" (UniqueName: \"kubernetes.io/projected/cb9d3e0a-aeb5-4221-a617-71a724c676ed-kube-api-access-chf9j\") pod \"ovn-controller-c4pcb\" (UID: \"cb9d3e0a-aeb5-4221-a617-71a724c676ed\") " pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.177728 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4pcb"
Nov 25 07:00:50 crc kubenswrapper[4482]: I1125 07:00:50.184677 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:00:51 crc kubenswrapper[4482]: I1125 07:00:51.333060 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c4pcb"]
Nov 25 07:00:51 crc kubenswrapper[4482]: I1125 07:00:51.412305 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4pcb" event={"ID":"cb9d3e0a-aeb5-4221-a617-71a724c676ed","Type":"ContainerStarted","Data":"683eee01f99862414c54d2615b3b9c63247cd25ca755f07894f03d3462983915"}
Nov 25 07:00:51 crc kubenswrapper[4482]: I1125 07:00:51.431260 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 25 07:00:51 crc kubenswrapper[4482]: I1125 07:00:51.561454 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pgdql"]
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.224371 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4hwhv"]
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.225630 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4hwhv"
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.232229 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.246153 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4hwhv"]
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.355019 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7ff02009-53ac-4f30-bfdd-f622a1491966-ovs-rundir\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv"
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.355083 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhnj6\" (UniqueName: \"kubernetes.io/projected/7ff02009-53ac-4f30-bfdd-f622a1491966-kube-api-access-hhnj6\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv"
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.355104 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff02009-53ac-4f30-bfdd-f622a1491966-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv"
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.355135 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff02009-53ac-4f30-bfdd-f622a1491966-config\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv"
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.355162 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff02009-53ac-4f30-bfdd-f622a1491966-combined-ca-bundle\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv"
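Every volume in this section walks the same three phases: operationExecutor.VerifyControllerAttachedVolume, operationExecutor.MountVolume started, and MountVolume.SetUp succeeded. A throwaway analysis sketch (regexes written against the klog header and escaped-quote style visible above; everything else is an assumption, not kubelet code) that reads journal lines on stdin and prints the started-to-succeeded latency per volume:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

var (
	// klog header as seen above, e.g. "I1125 07:00:50.000135"; klog omits the year.
	klogTime = regexp.MustCompile(`I(\d{4} \d{2}:\d{2}:\d{2}\.\d{6})`)
	// volume name, tolerating the escaped quotes in the journal text, e.g. volume \"scripts\"
	volName = regexp.MustCompile(`volume \\?"([^"\\]+)`)
)

func main() {
	started := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		t := klogTime.FindStringSubmatch(line)
		v := volName.FindStringSubmatch(line)
		if t == nil || v == nil {
			continue
		}
		// "0102 15:04:05.000000" = month+day plus microsecond time, matching the klog header.
		ts, err := time.Parse("0102 15:04:05.000000", t[1])
		if err != nil {
			continue
		}
		switch {
		case strings.Contains(line, "operationExecutor.MountVolume started"):
			started[v[1]] = ts
		case strings.Contains(line, "MountVolume.SetUp succeeded"):
			if t0, ok := started[v[1]]; ok {
				fmt.Printf("%-30s %v\n", v[1], ts.Sub(t0))
				delete(started, v[1])
			}
		}
	}
}

Run against the entries above it would show, for example, that secret and projected volumes take a few tens of milliseconds while host-path volumes complete almost immediately.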
pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.355205 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7ff02009-53ac-4f30-bfdd-f622a1491966-ovn-rundir\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.434761 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6656629b-3105-4bc0-a292-aa2fa6df9723","Type":"ContainerStarted","Data":"85a16ebfb6df2f637a5e283ed484cdd129cd1ea8cbf04733f93cff14a64abd8b"} Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.435992 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.448199 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pgdql" event={"ID":"c929347e-dfc5-409e-8d78-6e888f86a294","Type":"ContainerStarted","Data":"fc9b42fd3af1bd8a794907646a3cc1ecf5678a34ed1fd76c5c4cfad25c7326e7"} Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.453606 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c2db5853-8834-4085-9d9a-1aeacaf47d4e","Type":"ContainerStarted","Data":"d46e84af18cea40d09e5f86da6e83094310da7ffba35930d85434554c2fc437d"} Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.457219 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7ff02009-53ac-4f30-bfdd-f622a1491966-ovs-rundir\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.457302 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhnj6\" (UniqueName: \"kubernetes.io/projected/7ff02009-53ac-4f30-bfdd-f622a1491966-kube-api-access-hhnj6\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.457353 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff02009-53ac-4f30-bfdd-f622a1491966-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.457398 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff02009-53ac-4f30-bfdd-f622a1491966-config\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.457455 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff02009-53ac-4f30-bfdd-f622a1491966-combined-ca-bundle\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.457483 
4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7ff02009-53ac-4f30-bfdd-f622a1491966-ovn-rundir\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.458009 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7ff02009-53ac-4f30-bfdd-f622a1491966-ovn-rundir\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.458087 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7ff02009-53ac-4f30-bfdd-f622a1491966-ovs-rundir\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.459797 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff02009-53ac-4f30-bfdd-f622a1491966-config\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.469345 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff02009-53ac-4f30-bfdd-f622a1491966-combined-ca-bundle\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.471381 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff02009-53ac-4f30-bfdd-f622a1491966-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.479048 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhnj6\" (UniqueName: \"kubernetes.io/projected/7ff02009-53ac-4f30-bfdd-f622a1491966-kube-api-access-hhnj6\") pod \"ovn-controller-metrics-4hwhv\" (UID: \"7ff02009-53ac-4f30-bfdd-f622a1491966\") " pod="openstack/ovn-controller-metrics-4hwhv" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.483513 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.36205201 podStartE2EDuration="7.483500146s" podCreationTimestamp="2025-11-25 07:00:45 +0000 UTC" firstStartedPulling="2025-11-25 07:00:46.617078271 +0000 UTC m=+821.105309519" lastFinishedPulling="2025-11-25 07:00:50.738526395 +0000 UTC m=+825.226757655" observedRunningTime="2025-11-25 07:00:52.479551514 +0000 UTC m=+826.967782773" watchObservedRunningTime="2025-11-25 07:00:52.483500146 +0000 UTC m=+826.971731405" Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.551716 4482 util.go:30] "No sandbox for pod can be found. 
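The pod_startup_latency_tracker entry at 07:00:52.483513 is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling, best read from the monotonic m=+ offsets). A small sketch re-deriving both numbers from that one entry:

package main

import "fmt"

// Re-derives the two durations in the 07:00:52.483513 entry for
// kube-state-metrics-0 from the timestamps printed in that same entry.
func main() {
	const (
		created       = 45.0          // podCreationTimestamp 07:00:45, seconds within the minute
		watchObserved = 52.483500146  // watchObservedRunningTime 07:00:52.483500146
		firstPull     = 821.105309519 // firstStartedPulling, monotonic m=+ offset
		lastPull      = 825.226757655 // lastFinishedPulling, monotonic m=+ offset
	)
	e2e := watchObserved - created      // podStartE2EDuration
	slo := e2e - (lastPull - firstPull) // podStartSLOduration: E2E minus image-pull window
	fmt.Printf("e2e=%.9fs slo=%.9fs\n", e2e, slo)
	// Prints e2e=7.483500146s slo=3.362052010s, matching the logged
	// podStartE2EDuration="7.483500146s" and podStartSLOduration=3.36205201
	// (up to float rounding in the final digit).
}

The same arithmetic reproduces the second tracker entry further below, where ovn-controller-metrics-4hwhv spends about 6.73s of its 8.62s end-to-end startup pulling its image.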
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.734748 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848c894d9c-f46fl"]
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.818032 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f696d8f45-ldd8l"]
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.819373 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.821751 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.862157 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f696d8f45-ldd8l"]
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.978338 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-dns-svc\") pod \"dnsmasq-dns-5f696d8f45-ldd8l\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") " pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.978605 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5f696d8f45-ldd8l\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") " pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.978793 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-config\") pod \"dnsmasq-dns-5f696d8f45-ldd8l\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") " pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:52 crc kubenswrapper[4482]: I1125 07:00:52.978936 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j22dk\" (UniqueName: \"kubernetes.io/projected/b1469e22-6c31-480a-aad8-81d8c0def8d5-kube-api-access-j22dk\") pod \"dnsmasq-dns-5f696d8f45-ldd8l\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") " pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.081067 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-dns-svc\") pod \"dnsmasq-dns-5f696d8f45-ldd8l\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") " pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.081184 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5f696d8f45-ldd8l\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") " pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.081239 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-config\") pod \"dnsmasq-dns-5f696d8f45-ldd8l\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") " pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.081285 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j22dk\" (UniqueName: \"kubernetes.io/projected/b1469e22-6c31-480a-aad8-81d8c0def8d5-kube-api-access-j22dk\") pod \"dnsmasq-dns-5f696d8f45-ldd8l\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") " pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.082296 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5f696d8f45-ldd8l\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") " pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.082312 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-config\") pod \"dnsmasq-dns-5f696d8f45-ldd8l\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") " pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.083123 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-dns-svc\") pod \"dnsmasq-dns-5f696d8f45-ldd8l\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") " pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.095864 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j22dk\" (UniqueName: \"kubernetes.io/projected/b1469e22-6c31-480a-aad8-81d8c0def8d5-kube-api-access-j22dk\") pod \"dnsmasq-dns-5f696d8f45-ldd8l\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") " pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.158668 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.306643 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4hwhv"]
Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.370956 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.380112 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.384796 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.385181 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.385275 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-p6bnt" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.385348 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.391589 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.490112 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3e573623-eac5-440a-bd83-4849661f85f8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.490204 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e573623-eac5-440a-bd83-4849661f85f8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.490246 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e573623-eac5-440a-bd83-4849661f85f8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.490266 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nts2\" (UniqueName: \"kubernetes.io/projected/3e573623-eac5-440a-bd83-4849661f85f8-kube-api-access-5nts2\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.490319 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e573623-eac5-440a-bd83-4849661f85f8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.490354 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.490387 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e573623-eac5-440a-bd83-4849661f85f8-scripts\") pod \"ovsdbserver-sb-0\" (UID: 
\"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.490435 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e573623-eac5-440a-bd83-4849661f85f8-config\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.491281 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4hwhv" event={"ID":"7ff02009-53ac-4f30-bfdd-f622a1491966","Type":"ContainerStarted","Data":"fc8a5444c52dea7c229268205bb71541a6c4bed7f72e913eb595a8e0c269c5ff"} Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.564678 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f696d8f45-ldd8l"] Nov 25 07:00:53 crc kubenswrapper[4482]: W1125 07:00:53.583292 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1469e22_6c31_480a_aad8_81d8c0def8d5.slice/crio-12ad2a4c6c888f6f37dd5286a91107f5d73c3ed5cb2c889ce1a831476719f5b4 WatchSource:0}: Error finding container 12ad2a4c6c888f6f37dd5286a91107f5d73c3ed5cb2c889ce1a831476719f5b4: Status 404 returned error can't find the container with id 12ad2a4c6c888f6f37dd5286a91107f5d73c3ed5cb2c889ce1a831476719f5b4 Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.592740 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3e573623-eac5-440a-bd83-4849661f85f8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.592903 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e573623-eac5-440a-bd83-4849661f85f8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.593087 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e573623-eac5-440a-bd83-4849661f85f8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.593647 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nts2\" (UniqueName: \"kubernetes.io/projected/3e573623-eac5-440a-bd83-4849661f85f8-kube-api-access-5nts2\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.593764 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e573623-eac5-440a-bd83-4849661f85f8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.593805 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.593829 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e573623-eac5-440a-bd83-4849661f85f8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.593896 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e573623-eac5-440a-bd83-4849661f85f8-config\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.593139 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3e573623-eac5-440a-bd83-4849661f85f8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.594919 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.597039 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e573623-eac5-440a-bd83-4849661f85f8-config\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.597633 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e573623-eac5-440a-bd83-4849661f85f8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.598718 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e573623-eac5-440a-bd83-4849661f85f8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.598803 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e573623-eac5-440a-bd83-4849661f85f8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.613951 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e573623-eac5-440a-bd83-4849661f85f8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.616508 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nts2\" 
(UniqueName: \"kubernetes.io/projected/3e573623-eac5-440a-bd83-4849661f85f8-kube-api-access-5nts2\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.618612 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3e573623-eac5-440a-bd83-4849661f85f8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:53 crc kubenswrapper[4482]: I1125 07:00:53.721258 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 07:00:54 crc kubenswrapper[4482]: I1125 07:00:54.228794 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 07:00:54 crc kubenswrapper[4482]: W1125 07:00:54.240558 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e573623_eac5_440a_bd83_4849661f85f8.slice/crio-7e3b829aa299cc9d08d1b337e74bbe6f632dc09cf89810e391c7f63b980d469d WatchSource:0}: Error finding container 7e3b829aa299cc9d08d1b337e74bbe6f632dc09cf89810e391c7f63b980d469d: Status 404 returned error can't find the container with id 7e3b829aa299cc9d08d1b337e74bbe6f632dc09cf89810e391c7f63b980d469d Nov 25 07:00:54 crc kubenswrapper[4482]: I1125 07:00:54.508221 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3e573623-eac5-440a-bd83-4849661f85f8","Type":"ContainerStarted","Data":"7e3b829aa299cc9d08d1b337e74bbe6f632dc09cf89810e391c7f63b980d469d"} Nov 25 07:00:54 crc kubenswrapper[4482]: I1125 07:00:54.521861 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l" event={"ID":"b1469e22-6c31-480a-aad8-81d8c0def8d5","Type":"ContainerStarted","Data":"12ad2a4c6c888f6f37dd5286a91107f5d73c3ed5cb2c889ce1a831476719f5b4"} Nov 25 07:00:56 crc kubenswrapper[4482]: I1125 07:00:56.069701 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 07:01:00 crc kubenswrapper[4482]: I1125 07:01:00.581102 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4hwhv" event={"ID":"7ff02009-53ac-4f30-bfdd-f622a1491966","Type":"ContainerStarted","Data":"c629240f96b30bae7cfe48edf5f925a1849ce4fd15ce90d4d9685db35fdb11b2"} Nov 25 07:01:00 crc kubenswrapper[4482]: I1125 07:01:00.620464 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4hwhv" podStartSLOduration=1.891110327 podStartE2EDuration="8.620437654s" podCreationTimestamp="2025-11-25 07:00:52 +0000 UTC" firstStartedPulling="2025-11-25 07:00:53.319359327 +0000 UTC m=+827.807590587" lastFinishedPulling="2025-11-25 07:01:00.048686665 +0000 UTC m=+834.536917914" observedRunningTime="2025-11-25 07:01:00.6057539 +0000 UTC m=+835.093985160" watchObservedRunningTime="2025-11-25 07:01:00.620437654 +0000 UTC m=+835.108668913" Nov 25 07:01:00 crc kubenswrapper[4482]: I1125 07:01:00.961054 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-657d948df5-trc69"] Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:00.997968 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76c8776475-qd28b"] Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:00.999718 4482 util.go:30] "No sandbox for 
Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:00.999718 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.003201 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.013319 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c8776475-qd28b"] Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.153706 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-ovsdbserver-sb\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.153807 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-ovsdbserver-nb\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.154037 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-config\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.154098 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-dns-svc\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.154131 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpv6n\" (UniqueName: \"kubernetes.io/projected/e224ce8a-f213-4745-8cc0-7d1351065d13-kube-api-access-gpv6n\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.256681 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-config\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.256770 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-dns-svc\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.256812 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpv6n\" (UniqueName: \"kubernetes.io/projected/e224ce8a-f213-4745-8cc0-7d1351065d13-kube-api-access-gpv6n\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " 
pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.256869 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-ovsdbserver-sb\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.256902 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-ovsdbserver-nb\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.257932 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-dns-svc\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.257951 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-ovsdbserver-sb\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.258374 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-ovsdbserver-nb\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.258501 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-config\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.273509 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpv6n\" (UniqueName: \"kubernetes.io/projected/e224ce8a-f213-4745-8cc0-7d1351065d13-kube-api-access-gpv6n\") pod \"dnsmasq-dns-76c8776475-qd28b\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") " pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.334179 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c8776475-qd28b" Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.822826 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c8776475-qd28b"] Nov 25 07:01:01 crc kubenswrapper[4482]: I1125 07:01:01.843118 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 07:01:02 crc kubenswrapper[4482]: I1125 07:01:02.614448 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c8776475-qd28b" event={"ID":"e224ce8a-f213-4745-8cc0-7d1351065d13","Type":"ContainerStarted","Data":"f37dca9b30685ec0f1fcd93d6dc3bf4f370592d5adbc6b6b0af2ad20084e0284"} Nov 25 07:01:24 crc kubenswrapper[4482]: I1125 07:01:24.843021 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80610219-52d0-4832-9586-5f565148e662","Type":"ContainerStarted","Data":"0396b2915b1de9596b94bd5ccabe4b7d37ef65c00b8c74d279472bd9e3cd96bd"} Nov 25 07:01:24 crc kubenswrapper[4482]: I1125 07:01:24.845928 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9","Type":"ContainerStarted","Data":"576a45274bf50e3a75265bbbdc323fbd6761d5c4a729b3ae5ec93f5160b46948"} Nov 25 07:01:24 crc kubenswrapper[4482]: I1125 07:01:24.847524 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0f200db-f6f1-403b-bad6-85a803b5237c","Type":"ContainerStarted","Data":"5bb777607e066d395aae0c154642d129445b86b639d03147b2ce17c71317f3f9"} Nov 25 07:01:27 crc kubenswrapper[4482]: I1125 07:01:27.875564 4482 generic.go:334] "Generic (PLEG): container finished" podID="1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9" containerID="576a45274bf50e3a75265bbbdc323fbd6761d5c4a729b3ae5ec93f5160b46948" exitCode=0 Nov 25 07:01:27 crc kubenswrapper[4482]: I1125 07:01:27.875650 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9","Type":"ContainerDied","Data":"576a45274bf50e3a75265bbbdc323fbd6761d5c4a729b3ae5ec93f5160b46948"} Nov 25 07:01:28 crc kubenswrapper[4482]: I1125 07:01:28.886303 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4pcb" event={"ID":"cb9d3e0a-aeb5-4221-a617-71a724c676ed","Type":"ContainerStarted","Data":"65867e49a5cfcaf8ba940027a5114f6e4466d6c01f0b7d0af5244ce6e3bdb37f"} Nov 25 07:01:28 crc kubenswrapper[4482]: I1125 07:01:28.886620 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-c4pcb" Nov 25 07:01:28 crc kubenswrapper[4482]: I1125 07:01:28.889281 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1bdb1a2d-a0d6-4942-bbe9-acfb76223fc9","Type":"ContainerStarted","Data":"9396db7ba6648bd4693750daa77adbbf6ed5f2ddba1f1f56af60483d654b06da"} Nov 25 07:01:28 crc kubenswrapper[4482]: I1125 07:01:28.927462 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.624032854 podStartE2EDuration="48.927438357s" podCreationTimestamp="2025-11-25 07:00:40 +0000 UTC" firstStartedPulling="2025-11-25 07:00:43.134322283 +0000 UTC m=+817.622553543" lastFinishedPulling="2025-11-25 07:01:21.437727786 +0000 UTC m=+855.925959046" observedRunningTime="2025-11-25 07:01:28.918297045 +0000 UTC m=+863.406528314" watchObservedRunningTime="2025-11-25 07:01:28.927438357 +0000 UTC m=+863.415669606"
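
The pod_startup_latency_tracker entry for openstack-galera-0 above carries two durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (07:01:28.927438357 minus 07:00:40 gives 48.927438357s), and podStartSLOduration additionally subtracts the image-pull window, lastFinishedPulling minus firstStartedPulling (38.303405503s), leaving 10.624032854s; both match the logged values. When nothing had to be pulled, the pull timestamps are the zero time (0001-01-01 00:00:00 +0000 UTC) and the two durations coincide, as in the keystone and placement create jobs further down. A few lines of Go reproduce the arithmetic from the timestamps logged above (the layout string is an assumption about the timestamp format; the values are copied verbatim):

    package main

    import (
        "fmt"
        "time"
    )

    // Reproduces the openstack-galera-0 startup numbers logged above:
    // E2E = observedRunning - creation; SLO = E2E - image pull window.
    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-11-25 07:00:40 +0000 UTC")
        firstPull := parse("2025-11-25 07:00:43.134322283 +0000 UTC")
        lastPull := parse("2025-11-25 07:01:21.437727786 +0000 UTC")
        running := parse("2025-11-25 07:01:28.927438357 +0000 UTC") // watchObservedRunningTime

        e2e := running.Sub(created)
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println(e2e, slo) // 48.927438357s 10.624032854s
    }
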
Nov 25 07:01:28 crc kubenswrapper[4482]: I1125 07:01:28.927626 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-c4pcb" podStartSLOduration=3.585985828 podStartE2EDuration="39.927622785s" podCreationTimestamp="2025-11-25 07:00:49 +0000 UTC" firstStartedPulling="2025-11-25 07:00:51.352617781 +0000 UTC m=+825.840849040" lastFinishedPulling="2025-11-25 07:01:27.694254738 +0000 UTC m=+862.182485997" observedRunningTime="2025-11-25 07:01:28.903662306 +0000 UTC m=+863.391893565" watchObservedRunningTime="2025-11-25 07:01:28.927622785 +0000 UTC m=+863.415854044" Nov 25 07:01:30 crc kubenswrapper[4482]: E1125 07:01:30.469451 4482 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.26.133:39604->192.168.26.133:42749: write tcp 192.168.26.133:39604->192.168.26.133:42749: write: broken pipe Nov 25 07:01:32 crc kubenswrapper[4482]: I1125 07:01:32.337473 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 07:01:32 crc kubenswrapper[4482]: I1125 07:01:32.337778 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 07:01:32 crc kubenswrapper[4482]: I1125 07:01:32.439330 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.005607 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.640725 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-76cd-account-create-6zd5h"] Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.642471 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76cd-account-create-6zd5h" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.646521 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.650922 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7m8kq"] Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.652601 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7m8kq" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.660956 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76cd-account-create-6zd5h"] Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.666841 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7m8kq"] Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.780305 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vtgq\" (UniqueName: \"kubernetes.io/projected/b42ea052-21b5-407f-8d8d-f474f42e92ff-kube-api-access-5vtgq\") pod \"keystone-db-create-7m8kq\" (UID: \"b42ea052-21b5-407f-8d8d-f474f42e92ff\") " pod="openstack/keystone-db-create-7m8kq" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.780375 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f415cc2f-955d-4eef-bca2-2d990fc72f69-operator-scripts\") pod \"keystone-76cd-account-create-6zd5h\" (UID: \"f415cc2f-955d-4eef-bca2-2d990fc72f69\") " pod="openstack/keystone-76cd-account-create-6zd5h" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.780408 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jptzt\" (UniqueName: \"kubernetes.io/projected/f415cc2f-955d-4eef-bca2-2d990fc72f69-kube-api-access-jptzt\") pod \"keystone-76cd-account-create-6zd5h\" (UID: \"f415cc2f-955d-4eef-bca2-2d990fc72f69\") " pod="openstack/keystone-76cd-account-create-6zd5h" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.780566 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b42ea052-21b5-407f-8d8d-f474f42e92ff-operator-scripts\") pod \"keystone-db-create-7m8kq\" (UID: \"b42ea052-21b5-407f-8d8d-f474f42e92ff\") " pod="openstack/keystone-db-create-7m8kq" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.883449 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b42ea052-21b5-407f-8d8d-f474f42e92ff-operator-scripts\") pod \"keystone-db-create-7m8kq\" (UID: \"b42ea052-21b5-407f-8d8d-f474f42e92ff\") " pod="openstack/keystone-db-create-7m8kq" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.889020 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vtgq\" (UniqueName: \"kubernetes.io/projected/b42ea052-21b5-407f-8d8d-f474f42e92ff-kube-api-access-5vtgq\") pod \"keystone-db-create-7m8kq\" (UID: \"b42ea052-21b5-407f-8d8d-f474f42e92ff\") " pod="openstack/keystone-db-create-7m8kq" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.889343 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f415cc2f-955d-4eef-bca2-2d990fc72f69-operator-scripts\") pod \"keystone-76cd-account-create-6zd5h\" (UID: \"f415cc2f-955d-4eef-bca2-2d990fc72f69\") " pod="openstack/keystone-76cd-account-create-6zd5h" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.889398 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jptzt\" (UniqueName: \"kubernetes.io/projected/f415cc2f-955d-4eef-bca2-2d990fc72f69-kube-api-access-jptzt\") pod 
\"keystone-76cd-account-create-6zd5h\" (UID: \"f415cc2f-955d-4eef-bca2-2d990fc72f69\") " pod="openstack/keystone-76cd-account-create-6zd5h" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.893605 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f415cc2f-955d-4eef-bca2-2d990fc72f69-operator-scripts\") pod \"keystone-76cd-account-create-6zd5h\" (UID: \"f415cc2f-955d-4eef-bca2-2d990fc72f69\") " pod="openstack/keystone-76cd-account-create-6zd5h" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.895102 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b42ea052-21b5-407f-8d8d-f474f42e92ff-operator-scripts\") pod \"keystone-db-create-7m8kq\" (UID: \"b42ea052-21b5-407f-8d8d-f474f42e92ff\") " pod="openstack/keystone-db-create-7m8kq" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.907675 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-d8r5j"] Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.912260 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jptzt\" (UniqueName: \"kubernetes.io/projected/f415cc2f-955d-4eef-bca2-2d990fc72f69-kube-api-access-jptzt\") pod \"keystone-76cd-account-create-6zd5h\" (UID: \"f415cc2f-955d-4eef-bca2-2d990fc72f69\") " pod="openstack/keystone-76cd-account-create-6zd5h" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.914637 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d8r5j" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.918380 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-d8r5j"] Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.923145 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vtgq\" (UniqueName: \"kubernetes.io/projected/b42ea052-21b5-407f-8d8d-f474f42e92ff-kube-api-access-5vtgq\") pod \"keystone-db-create-7m8kq\" (UID: \"b42ea052-21b5-407f-8d8d-f474f42e92ff\") " pod="openstack/keystone-db-create-7m8kq" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.972660 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76cd-account-create-6zd5h" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.988612 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7m8kq" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.991642 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f5da866-34ec-4b01-826a-1f2061eb3fcc-operator-scripts\") pod \"placement-db-create-d8r5j\" (UID: \"9f5da866-34ec-4b01-826a-1f2061eb3fcc\") " pod="openstack/placement-db-create-d8r5j" Nov 25 07:01:33 crc kubenswrapper[4482]: I1125 07:01:33.991700 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn47t\" (UniqueName: \"kubernetes.io/projected/9f5da866-34ec-4b01-826a-1f2061eb3fcc-kube-api-access-wn47t\") pod \"placement-db-create-d8r5j\" (UID: \"9f5da866-34ec-4b01-826a-1f2061eb3fcc\") " pod="openstack/placement-db-create-d8r5j" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.006367 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e764-account-create-492vx"] Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.007865 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e764-account-create-492vx" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.012508 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.015867 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e764-account-create-492vx"] Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.093460 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6da49643-084c-4726-ab3f-d640282105c3-operator-scripts\") pod \"placement-e764-account-create-492vx\" (UID: \"6da49643-084c-4726-ab3f-d640282105c3\") " pod="openstack/placement-e764-account-create-492vx" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.093531 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fslk\" (UniqueName: \"kubernetes.io/projected/6da49643-084c-4726-ab3f-d640282105c3-kube-api-access-5fslk\") pod \"placement-e764-account-create-492vx\" (UID: \"6da49643-084c-4726-ab3f-d640282105c3\") " pod="openstack/placement-e764-account-create-492vx" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.093606 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f5da866-34ec-4b01-826a-1f2061eb3fcc-operator-scripts\") pod \"placement-db-create-d8r5j\" (UID: \"9f5da866-34ec-4b01-826a-1f2061eb3fcc\") " pod="openstack/placement-db-create-d8r5j" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.093634 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn47t\" (UniqueName: \"kubernetes.io/projected/9f5da866-34ec-4b01-826a-1f2061eb3fcc-kube-api-access-wn47t\") pod \"placement-db-create-d8r5j\" (UID: \"9f5da866-34ec-4b01-826a-1f2061eb3fcc\") " pod="openstack/placement-db-create-d8r5j" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.094939 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f5da866-34ec-4b01-826a-1f2061eb3fcc-operator-scripts\") pod \"placement-db-create-d8r5j\" (UID: 
\"9f5da866-34ec-4b01-826a-1f2061eb3fcc\") " pod="openstack/placement-db-create-d8r5j" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.108525 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn47t\" (UniqueName: \"kubernetes.io/projected/9f5da866-34ec-4b01-826a-1f2061eb3fcc-kube-api-access-wn47t\") pod \"placement-db-create-d8r5j\" (UID: \"9f5da866-34ec-4b01-826a-1f2061eb3fcc\") " pod="openstack/placement-db-create-d8r5j" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.195847 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6da49643-084c-4726-ab3f-d640282105c3-operator-scripts\") pod \"placement-e764-account-create-492vx\" (UID: \"6da49643-084c-4726-ab3f-d640282105c3\") " pod="openstack/placement-e764-account-create-492vx" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.195939 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fslk\" (UniqueName: \"kubernetes.io/projected/6da49643-084c-4726-ab3f-d640282105c3-kube-api-access-5fslk\") pod \"placement-e764-account-create-492vx\" (UID: \"6da49643-084c-4726-ab3f-d640282105c3\") " pod="openstack/placement-e764-account-create-492vx" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.196567 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6da49643-084c-4726-ab3f-d640282105c3-operator-scripts\") pod \"placement-e764-account-create-492vx\" (UID: \"6da49643-084c-4726-ab3f-d640282105c3\") " pod="openstack/placement-e764-account-create-492vx" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.210931 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fslk\" (UniqueName: \"kubernetes.io/projected/6da49643-084c-4726-ab3f-d640282105c3-kube-api-access-5fslk\") pod \"placement-e764-account-create-492vx\" (UID: \"6da49643-084c-4726-ab3f-d640282105c3\") " pod="openstack/placement-e764-account-create-492vx" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.274358 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d8r5j" Nov 25 07:01:34 crc kubenswrapper[4482]: I1125 07:01:34.331131 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e764-account-create-492vx" Nov 25 07:01:37 crc kubenswrapper[4482]: I1125 07:01:37.685307 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7m8kq"] Nov 25 07:01:37 crc kubenswrapper[4482]: W1125 07:01:37.690661 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb42ea052_21b5_407f_8d8d_f474f42e92ff.slice/crio-0f845a9ed932543c4910ee6764ef77fdfc31d1257e5f5934ee2b9b65c04aaf43 WatchSource:0}: Error finding container 0f845a9ed932543c4910ee6764ef77fdfc31d1257e5f5934ee2b9b65c04aaf43: Status 404 returned error can't find the container with id 0f845a9ed932543c4910ee6764ef77fdfc31d1257e5f5934ee2b9b65c04aaf43 Nov 25 07:01:37 crc kubenswrapper[4482]: W1125 07:01:37.806578 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf415cc2f_955d_4eef_bca2_2d990fc72f69.slice/crio-74f2e5b07b17221700eda8329529e96a6393b2b4e1445aec117aa1da404f94df WatchSource:0}: Error finding container 74f2e5b07b17221700eda8329529e96a6393b2b4e1445aec117aa1da404f94df: Status 404 returned error can't find the container with id 74f2e5b07b17221700eda8329529e96a6393b2b4e1445aec117aa1da404f94df Nov 25 07:01:37 crc kubenswrapper[4482]: I1125 07:01:37.823099 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76cd-account-create-6zd5h"] Nov 25 07:01:37 crc kubenswrapper[4482]: I1125 07:01:37.844076 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-d8r5j"] Nov 25 07:01:37 crc kubenswrapper[4482]: I1125 07:01:37.850292 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e764-account-create-492vx"] Nov 25 07:01:37 crc kubenswrapper[4482]: I1125 07:01:37.981858 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e764-account-create-492vx" event={"ID":"6da49643-084c-4726-ab3f-d640282105c3","Type":"ContainerStarted","Data":"b9a7b53463110fdf22852af9a7b4547ac110db6b48255756a863f4d87292d6aa"} Nov 25 07:01:37 crc kubenswrapper[4482]: I1125 07:01:37.988397 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7m8kq" event={"ID":"b42ea052-21b5-407f-8d8d-f474f42e92ff","Type":"ContainerStarted","Data":"99938804327d074506ade0be54e949d16e6d9d49671f0e0fd4f9f20caca1b9a7"} Nov 25 07:01:37 crc kubenswrapper[4482]: I1125 07:01:37.988450 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7m8kq" event={"ID":"b42ea052-21b5-407f-8d8d-f474f42e92ff","Type":"ContainerStarted","Data":"0f845a9ed932543c4910ee6764ef77fdfc31d1257e5f5934ee2b9b65c04aaf43"} Nov 25 07:01:37 crc kubenswrapper[4482]: I1125 07:01:37.994973 4482 generic.go:334] "Generic (PLEG): container finished" podID="b2e203bd-17c2-478b-9682-9e443e72e76d" containerID="955d13bdc847cabf03ec7b00c384e03365f5af36066d8e3910e1be243d0cecd9" exitCode=0 Nov 25 07:01:37 crc kubenswrapper[4482]: I1125 07:01:37.995220 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbd9697cc-25dts" event={"ID":"b2e203bd-17c2-478b-9682-9e443e72e76d","Type":"ContainerDied","Data":"955d13bdc847cabf03ec7b00c384e03365f5af36066d8e3910e1be243d0cecd9"} Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.004160 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-7m8kq" podStartSLOduration=5.004149893 
podStartE2EDuration="5.004149893s" podCreationTimestamp="2025-11-25 07:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:01:38.000359969 +0000 UTC m=+872.488591228" watchObservedRunningTime="2025-11-25 07:01:38.004149893 +0000 UTC m=+872.492381152" Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.009085 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d8r5j" event={"ID":"9f5da866-34ec-4b01-826a-1f2061eb3fcc","Type":"ContainerStarted","Data":"fbda7f25b49f178f2fbe9151ba6e248d90608770ec9d17b6fe3c7da12a173eb0"} Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.012164 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76cd-account-create-6zd5h" event={"ID":"f415cc2f-955d-4eef-bca2-2d990fc72f69","Type":"ContainerStarted","Data":"74f2e5b07b17221700eda8329529e96a6393b2b4e1445aec117aa1da404f94df"} Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.039065 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-76cd-account-create-6zd5h" podStartSLOduration=5.039045166 podStartE2EDuration="5.039045166s" podCreationTimestamp="2025-11-25 07:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:01:38.03047524 +0000 UTC m=+872.518706499" watchObservedRunningTime="2025-11-25 07:01:38.039045166 +0000 UTC m=+872.527276424" Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.510792 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bbd9697cc-25dts" Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.535161 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-d8r5j" podStartSLOduration=5.535141448 podStartE2EDuration="5.535141448s" podCreationTimestamp="2025-11-25 07:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:01:38.048720635 +0000 UTC m=+872.536951894" watchObservedRunningTime="2025-11-25 07:01:38.535141448 +0000 UTC m=+873.023372697" Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.699540 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e203bd-17c2-478b-9682-9e443e72e76d-config\") pod \"b2e203bd-17c2-478b-9682-9e443e72e76d\" (UID: \"b2e203bd-17c2-478b-9682-9e443e72e76d\") " Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.699602 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49mnv\" (UniqueName: \"kubernetes.io/projected/b2e203bd-17c2-478b-9682-9e443e72e76d-kube-api-access-49mnv\") pod \"b2e203bd-17c2-478b-9682-9e443e72e76d\" (UID: \"b2e203bd-17c2-478b-9682-9e443e72e76d\") " Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.699669 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2e203bd-17c2-478b-9682-9e443e72e76d-dns-svc\") pod \"b2e203bd-17c2-478b-9682-9e443e72e76d\" (UID: \"b2e203bd-17c2-478b-9682-9e443e72e76d\") " Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.711138 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b2e203bd-17c2-478b-9682-9e443e72e76d-kube-api-access-49mnv" (OuterVolumeSpecName: "kube-api-access-49mnv") pod "b2e203bd-17c2-478b-9682-9e443e72e76d" (UID: "b2e203bd-17c2-478b-9682-9e443e72e76d"). InnerVolumeSpecName "kube-api-access-49mnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.727668 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2e203bd-17c2-478b-9682-9e443e72e76d-config" (OuterVolumeSpecName: "config") pod "b2e203bd-17c2-478b-9682-9e443e72e76d" (UID: "b2e203bd-17c2-478b-9682-9e443e72e76d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.745662 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2e203bd-17c2-478b-9682-9e443e72e76d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b2e203bd-17c2-478b-9682-9e443e72e76d" (UID: "b2e203bd-17c2-478b-9682-9e443e72e76d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.802260 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e203bd-17c2-478b-9682-9e443e72e76d-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.802323 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49mnv\" (UniqueName: \"kubernetes.io/projected/b2e203bd-17c2-478b-9682-9e443e72e76d-kube-api-access-49mnv\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:38 crc kubenswrapper[4482]: I1125 07:01:38.802342 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2e203bd-17c2-478b-9682-9e443e72e76d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.047325 4482 generic.go:334] "Generic (PLEG): container finished" podID="6da49643-084c-4726-ab3f-d640282105c3" containerID="b2852e446bd1dbdc835255fdcf70d485fc8d1935bd59836352d4c24d92d2eb4a" exitCode=0 Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.047429 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e764-account-create-492vx" event={"ID":"6da49643-084c-4726-ab3f-d640282105c3","Type":"ContainerDied","Data":"b2852e446bd1dbdc835255fdcf70d485fc8d1935bd59836352d4c24d92d2eb4a"} Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.049380 4482 generic.go:334] "Generic (PLEG): container finished" podID="b42ea052-21b5-407f-8d8d-f474f42e92ff" containerID="99938804327d074506ade0be54e949d16e6d9d49671f0e0fd4f9f20caca1b9a7" exitCode=0 Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.049453 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7m8kq" event={"ID":"b42ea052-21b5-407f-8d8d-f474f42e92ff","Type":"ContainerDied","Data":"99938804327d074506ade0be54e949d16e6d9d49671f0e0fd4f9f20caca1b9a7"} Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.054077 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bbd9697cc-25dts" event={"ID":"b2e203bd-17c2-478b-9682-9e443e72e76d","Type":"ContainerDied","Data":"6bce75a19bc852ea7572be26bae7e6edf6e129a9b73de1416ff3ba32bc3fded0"} Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.054156 4482 scope.go:117] "RemoveContainer" 
containerID="955d13bdc847cabf03ec7b00c384e03365f5af36066d8e3910e1be243d0cecd9" Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.054105 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bbd9697cc-25dts" Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.064586 4482 generic.go:334] "Generic (PLEG): container finished" podID="9f5da866-34ec-4b01-826a-1f2061eb3fcc" containerID="0bee1c445376db5e16c48dd26adea1cd6aa36a61033ef86239a31d624dd6e545" exitCode=0 Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.064644 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d8r5j" event={"ID":"9f5da866-34ec-4b01-826a-1f2061eb3fcc","Type":"ContainerDied","Data":"0bee1c445376db5e16c48dd26adea1cd6aa36a61033ef86239a31d624dd6e545"} Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.071574 4482 generic.go:334] "Generic (PLEG): container finished" podID="f415cc2f-955d-4eef-bca2-2d990fc72f69" containerID="324f9f52ab7a4fc32f1b38bcb0e9ee42a28d5c810bd52862a6b02c65fa70f133" exitCode=0 Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.071621 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76cd-account-create-6zd5h" event={"ID":"f415cc2f-955d-4eef-bca2-2d990fc72f69","Type":"ContainerDied","Data":"324f9f52ab7a4fc32f1b38bcb0e9ee42a28d5c810bd52862a6b02c65fa70f133"} Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.117967 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bbd9697cc-25dts"] Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.129690 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bbd9697cc-25dts"] Nov 25 07:01:39 crc kubenswrapper[4482]: I1125 07:01:39.837836 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2e203bd-17c2-478b-9682-9e443e72e76d" path="/var/lib/kubelet/pods/b2e203bd-17c2-478b-9682-9e443e72e76d/volumes" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.082584 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3e573623-eac5-440a-bd83-4849661f85f8","Type":"ContainerStarted","Data":"df4d6aaed1a94deb8529ade2da0e748073917aa11fd83e9703f23bc1fef89d13"} Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.082631 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3e573623-eac5-440a-bd83-4849661f85f8","Type":"ContainerStarted","Data":"d30cc853aca166fca225d42d6712e2773810b4203e526ee935f5f15119b407b9"} Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.111530 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.53825799 podStartE2EDuration="48.111507405s" podCreationTimestamp="2025-11-25 07:00:52 +0000 UTC" firstStartedPulling="2025-11-25 07:00:54.24297788 +0000 UTC m=+828.731209140" lastFinishedPulling="2025-11-25 07:01:38.816227295 +0000 UTC m=+873.304458555" observedRunningTime="2025-11-25 07:01:40.111145743 +0000 UTC m=+874.599377002" watchObservedRunningTime="2025-11-25 07:01:40.111507405 +0000 UTC m=+874.599738664" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.462463 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76cd-account-create-6zd5h" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.518228 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e764-account-create-492vx" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.524504 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d8r5j" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.532374 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7m8kq" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.638713 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6da49643-084c-4726-ab3f-d640282105c3-operator-scripts\") pod \"6da49643-084c-4726-ab3f-d640282105c3\" (UID: \"6da49643-084c-4726-ab3f-d640282105c3\") " Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.638769 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vtgq\" (UniqueName: \"kubernetes.io/projected/b42ea052-21b5-407f-8d8d-f474f42e92ff-kube-api-access-5vtgq\") pod \"b42ea052-21b5-407f-8d8d-f474f42e92ff\" (UID: \"b42ea052-21b5-407f-8d8d-f474f42e92ff\") " Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.638856 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jptzt\" (UniqueName: \"kubernetes.io/projected/f415cc2f-955d-4eef-bca2-2d990fc72f69-kube-api-access-jptzt\") pod \"f415cc2f-955d-4eef-bca2-2d990fc72f69\" (UID: \"f415cc2f-955d-4eef-bca2-2d990fc72f69\") " Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.638940 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f415cc2f-955d-4eef-bca2-2d990fc72f69-operator-scripts\") pod \"f415cc2f-955d-4eef-bca2-2d990fc72f69\" (UID: \"f415cc2f-955d-4eef-bca2-2d990fc72f69\") " Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.638971 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn47t\" (UniqueName: \"kubernetes.io/projected/9f5da866-34ec-4b01-826a-1f2061eb3fcc-kube-api-access-wn47t\") pod \"9f5da866-34ec-4b01-826a-1f2061eb3fcc\" (UID: \"9f5da866-34ec-4b01-826a-1f2061eb3fcc\") " Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.639016 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b42ea052-21b5-407f-8d8d-f474f42e92ff-operator-scripts\") pod \"b42ea052-21b5-407f-8d8d-f474f42e92ff\" (UID: \"b42ea052-21b5-407f-8d8d-f474f42e92ff\") " Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.639031 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f5da866-34ec-4b01-826a-1f2061eb3fcc-operator-scripts\") pod \"9f5da866-34ec-4b01-826a-1f2061eb3fcc\" (UID: \"9f5da866-34ec-4b01-826a-1f2061eb3fcc\") " Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.639069 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fslk\" (UniqueName: \"kubernetes.io/projected/6da49643-084c-4726-ab3f-d640282105c3-kube-api-access-5fslk\") pod \"6da49643-084c-4726-ab3f-d640282105c3\" (UID: \"6da49643-084c-4726-ab3f-d640282105c3\") " Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.639540 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b42ea052-21b5-407f-8d8d-f474f42e92ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b42ea052-21b5-407f-8d8d-f474f42e92ff" (UID: "b42ea052-21b5-407f-8d8d-f474f42e92ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.639621 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f5da866-34ec-4b01-826a-1f2061eb3fcc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f5da866-34ec-4b01-826a-1f2061eb3fcc" (UID: "9f5da866-34ec-4b01-826a-1f2061eb3fcc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.639956 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b42ea052-21b5-407f-8d8d-f474f42e92ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.639992 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f5da866-34ec-4b01-826a-1f2061eb3fcc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.639993 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6da49643-084c-4726-ab3f-d640282105c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6da49643-084c-4726-ab3f-d640282105c3" (UID: "6da49643-084c-4726-ab3f-d640282105c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.640234 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f415cc2f-955d-4eef-bca2-2d990fc72f69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f415cc2f-955d-4eef-bca2-2d990fc72f69" (UID: "f415cc2f-955d-4eef-bca2-2d990fc72f69"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.644918 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f415cc2f-955d-4eef-bca2-2d990fc72f69-kube-api-access-jptzt" (OuterVolumeSpecName: "kube-api-access-jptzt") pod "f415cc2f-955d-4eef-bca2-2d990fc72f69" (UID: "f415cc2f-955d-4eef-bca2-2d990fc72f69"). InnerVolumeSpecName "kube-api-access-jptzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.646247 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da49643-084c-4726-ab3f-d640282105c3-kube-api-access-5fslk" (OuterVolumeSpecName: "kube-api-access-5fslk") pod "6da49643-084c-4726-ab3f-d640282105c3" (UID: "6da49643-084c-4726-ab3f-d640282105c3"). InnerVolumeSpecName "kube-api-access-5fslk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.646415 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b42ea052-21b5-407f-8d8d-f474f42e92ff-kube-api-access-5vtgq" (OuterVolumeSpecName: "kube-api-access-5vtgq") pod "b42ea052-21b5-407f-8d8d-f474f42e92ff" (UID: "b42ea052-21b5-407f-8d8d-f474f42e92ff"). InnerVolumeSpecName "kube-api-access-5vtgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.646815 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f5da866-34ec-4b01-826a-1f2061eb3fcc-kube-api-access-wn47t" (OuterVolumeSpecName: "kube-api-access-wn47t") pod "9f5da866-34ec-4b01-826a-1f2061eb3fcc" (UID: "9f5da866-34ec-4b01-826a-1f2061eb3fcc"). InnerVolumeSpecName "kube-api-access-wn47t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.741326 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f415cc2f-955d-4eef-bca2-2d990fc72f69-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.741352 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn47t\" (UniqueName: \"kubernetes.io/projected/9f5da866-34ec-4b01-826a-1f2061eb3fcc-kube-api-access-wn47t\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.741368 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fslk\" (UniqueName: \"kubernetes.io/projected/6da49643-084c-4726-ab3f-d640282105c3-kube-api-access-5fslk\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.741377 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vtgq\" (UniqueName: \"kubernetes.io/projected/b42ea052-21b5-407f-8d8d-f474f42e92ff-kube-api-access-5vtgq\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.741387 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6da49643-084c-4726-ab3f-d640282105c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:40 crc kubenswrapper[4482]: I1125 07:01:40.741396 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jptzt\" (UniqueName: \"kubernetes.io/projected/f415cc2f-955d-4eef-bca2-2d990fc72f69-kube-api-access-jptzt\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:41 crc kubenswrapper[4482]: I1125 07:01:41.094489 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d8r5j" event={"ID":"9f5da866-34ec-4b01-826a-1f2061eb3fcc","Type":"ContainerDied","Data":"fbda7f25b49f178f2fbe9151ba6e248d90608770ec9d17b6fe3c7da12a173eb0"} Nov 25 07:01:41 crc kubenswrapper[4482]: I1125 07:01:41.094539 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbda7f25b49f178f2fbe9151ba6e248d90608770ec9d17b6fe3c7da12a173eb0" Nov 25 07:01:41 crc kubenswrapper[4482]: I1125 07:01:41.094546 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-d8r5j" Nov 25 07:01:41 crc kubenswrapper[4482]: I1125 07:01:41.097053 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76cd-account-create-6zd5h" event={"ID":"f415cc2f-955d-4eef-bca2-2d990fc72f69","Type":"ContainerDied","Data":"74f2e5b07b17221700eda8329529e96a6393b2b4e1445aec117aa1da404f94df"} Nov 25 07:01:41 crc kubenswrapper[4482]: I1125 07:01:41.097103 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74f2e5b07b17221700eda8329529e96a6393b2b4e1445aec117aa1da404f94df" Nov 25 07:01:41 crc kubenswrapper[4482]: I1125 07:01:41.097214 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76cd-account-create-6zd5h" Nov 25 07:01:41 crc kubenswrapper[4482]: I1125 07:01:41.102693 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e764-account-create-492vx" event={"ID":"6da49643-084c-4726-ab3f-d640282105c3","Type":"ContainerDied","Data":"b9a7b53463110fdf22852af9a7b4547ac110db6b48255756a863f4d87292d6aa"} Nov 25 07:01:41 crc kubenswrapper[4482]: I1125 07:01:41.102734 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a7b53463110fdf22852af9a7b4547ac110db6b48255756a863f4d87292d6aa" Nov 25 07:01:41 crc kubenswrapper[4482]: I1125 07:01:41.102795 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e764-account-create-492vx" Nov 25 07:01:41 crc kubenswrapper[4482]: I1125 07:01:41.106191 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7m8kq" Nov 25 07:01:41 crc kubenswrapper[4482]: I1125 07:01:41.106561 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7m8kq" event={"ID":"b42ea052-21b5-407f-8d8d-f474f42e92ff","Type":"ContainerDied","Data":"0f845a9ed932543c4910ee6764ef77fdfc31d1257e5f5934ee2b9b65c04aaf43"} Nov 25 07:01:41 crc kubenswrapper[4482]: I1125 07:01:41.106597 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f845a9ed932543c4910ee6764ef77fdfc31d1257e5f5934ee2b9b65c04aaf43" Nov 25 07:01:41 crc kubenswrapper[4482]: I1125 07:01:41.721761 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 25 07:01:43 crc kubenswrapper[4482]: I1125 07:01:43.722351 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 25 07:01:44 crc kubenswrapper[4482]: I1125 07:01:44.785819 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 25 07:01:44 crc kubenswrapper[4482]: I1125 07:01:44.834856 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 25 07:01:46 crc kubenswrapper[4482]: I1125 07:01:46.155198 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9bced979-1034-4b28-8059-15a06044eed8","Type":"ContainerStarted","Data":"8d6ba7e1114ab8c953e2445cadc7cfc110fb074273cb7bb94b6a7edd847ab142"} Nov 25 07:01:46 crc kubenswrapper[4482]: I1125 07:01:46.155538 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 25 07:01:46 crc kubenswrapper[4482]: I1125 07:01:46.157280 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"685ea58c-3786-479c-bc85-9bd2ebd3d9a7","Type":"ContainerStarted","Data":"5f0f065fe945f76b46435ed6fe7394a03d3620dd793ba38d638902a736aaabd9"} Nov 25 07:01:46 crc kubenswrapper[4482]: I1125 07:01:46.176807 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.9233877660000003 podStartE2EDuration="1m3.176776089s" podCreationTimestamp="2025-11-25 07:00:43 +0000 UTC" firstStartedPulling="2025-11-25 07:00:45.274652638 +0000 UTC m=+819.762883896" lastFinishedPulling="2025-11-25 07:01:45.52804096 +0000 UTC m=+880.016272219" observedRunningTime="2025-11-25 07:01:46.173830957 +0000 UTC m=+880.662062216" watchObservedRunningTime="2025-11-25 07:01:46.176776089 +0000 UTC m=+880.665007349" Nov 25 07:01:49 crc kubenswrapper[4482]: I1125 07:01:49.185219 4482 generic.go:334] "Generic (PLEG): container finished" podID="685ea58c-3786-479c-bc85-9bd2ebd3d9a7" containerID="5f0f065fe945f76b46435ed6fe7394a03d3620dd793ba38d638902a736aaabd9" exitCode=0 Nov 25 07:01:49 crc kubenswrapper[4482]: I1125 07:01:49.185318 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"685ea58c-3786-479c-bc85-9bd2ebd3d9a7","Type":"ContainerDied","Data":"5f0f065fe945f76b46435ed6fe7394a03d3620dd793ba38d638902a736aaabd9"} Nov 25 07:01:50 crc kubenswrapper[4482]: I1125 07:01:50.215289 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"685ea58c-3786-479c-bc85-9bd2ebd3d9a7","Type":"ContainerStarted","Data":"59ff5d52e3d21cb52153b645094318ca60ef5a588b1f12c06b6c8e6fc07734bd"} Nov 25 07:01:50 crc kubenswrapper[4482]: I1125 07:01:50.245808 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.497662592 podStartE2EDuration="1m8.245761832s" podCreationTimestamp="2025-11-25 07:00:42 +0000 UTC" firstStartedPulling="2025-11-25 07:00:44.778987611 +0000 UTC m=+819.267218870" lastFinishedPulling="2025-11-25 07:01:45.527086851 +0000 UTC m=+880.015318110" observedRunningTime="2025-11-25 07:01:50.239430757 +0000 UTC m=+884.727662017" watchObservedRunningTime="2025-11-25 07:01:50.245761832 +0000 UTC m=+884.733993091" Nov 25 07:01:53 crc kubenswrapper[4482]: I1125 07:01:53.459368 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 07:01:53 crc kubenswrapper[4482]: I1125 07:01:53.459871 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 07:01:53 crc kubenswrapper[4482]: I1125 07:01:53.963341 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.149942 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-ddgt4"] Nov 25 07:01:54 crc kubenswrapper[4482]: E1125 07:01:54.150246 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e203bd-17c2-478b-9682-9e443e72e76d" containerName="init" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.150265 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e203bd-17c2-478b-9682-9e443e72e76d" containerName="init" Nov 25 07:01:54 crc kubenswrapper[4482]: E1125 07:01:54.150288 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b42ea052-21b5-407f-8d8d-f474f42e92ff" containerName="mariadb-database-create" Nov 25 07:01:54 crc 
kubenswrapper[4482]: I1125 07:01:54.150294 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="b42ea052-21b5-407f-8d8d-f474f42e92ff" containerName="mariadb-database-create" Nov 25 07:01:54 crc kubenswrapper[4482]: E1125 07:01:54.150303 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f5da866-34ec-4b01-826a-1f2061eb3fcc" containerName="mariadb-database-create" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.150309 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f5da866-34ec-4b01-826a-1f2061eb3fcc" containerName="mariadb-database-create" Nov 25 07:01:54 crc kubenswrapper[4482]: E1125 07:01:54.150315 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da49643-084c-4726-ab3f-d640282105c3" containerName="mariadb-account-create" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.150321 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da49643-084c-4726-ab3f-d640282105c3" containerName="mariadb-account-create" Nov 25 07:01:54 crc kubenswrapper[4482]: E1125 07:01:54.150332 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f415cc2f-955d-4eef-bca2-2d990fc72f69" containerName="mariadb-account-create" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.150337 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f415cc2f-955d-4eef-bca2-2d990fc72f69" containerName="mariadb-account-create" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.150478 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f415cc2f-955d-4eef-bca2-2d990fc72f69" containerName="mariadb-account-create" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.150489 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da49643-084c-4726-ab3f-d640282105c3" containerName="mariadb-account-create" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.150500 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="b42ea052-21b5-407f-8d8d-f474f42e92ff" containerName="mariadb-database-create" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.150506 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f5da866-34ec-4b01-826a-1f2061eb3fcc" containerName="mariadb-database-create" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.150514 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e203bd-17c2-478b-9682-9e443e72e76d" containerName="init" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.150950 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-ddgt4" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.152609 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.162905 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ddgt4"] Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.255536 4482 generic.go:334] "Generic (PLEG): container finished" podID="66afc9c3-310f-426e-a54e-3ef9d8888a32" containerID="ca1314a62d9c58eb4b3abb45224ec46fcfdacf54dd250d1c7b1f83db38ec58a0" exitCode=0 Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.255587 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59899b64cc-ffbfd" event={"ID":"66afc9c3-310f-426e-a54e-3ef9d8888a32","Type":"ContainerDied","Data":"ca1314a62d9c58eb4b3abb45224ec46fcfdacf54dd250d1c7b1f83db38ec58a0"} Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.259065 4482 generic.go:334] "Generic (PLEG): container finished" podID="c929347e-dfc5-409e-8d78-6e888f86a294" containerID="6bec18cc2365908a3a4ae378d20736b79d996b63e3835291e7773c53ac627777" exitCode=0 Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.260590 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pgdql" event={"ID":"c929347e-dfc5-409e-8d78-6e888f86a294","Type":"ContainerDied","Data":"6bec18cc2365908a3a4ae378d20736b79d996b63e3835291e7773c53ac627777"} Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.261423 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1193-account-create-2nz49"] Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.264101 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1193-account-create-2nz49" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.267738 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.279241 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1193-account-create-2nz49"] Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.316988 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fntfs\" (UniqueName: \"kubernetes.io/projected/c26151e9-5ea6-4cd4-810c-e2d22aef5d7e-kube-api-access-fntfs\") pod \"glance-db-create-ddgt4\" (UID: \"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e\") " pod="openstack/glance-db-create-ddgt4" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.317155 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26151e9-5ea6-4cd4-810c-e2d22aef5d7e-operator-scripts\") pod \"glance-db-create-ddgt4\" (UID: \"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e\") " pod="openstack/glance-db-create-ddgt4" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.378723 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.418529 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24002027-a259-4705-a0a0-9d2479988e23-operator-scripts\") pod \"glance-1193-account-create-2nz49\" (UID: \"24002027-a259-4705-a0a0-9d2479988e23\") " pod="openstack/glance-1193-account-create-2nz49" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.418604 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwqnf\" (UniqueName: \"kubernetes.io/projected/24002027-a259-4705-a0a0-9d2479988e23-kube-api-access-vwqnf\") pod \"glance-1193-account-create-2nz49\" (UID: \"24002027-a259-4705-a0a0-9d2479988e23\") " pod="openstack/glance-1193-account-create-2nz49" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.418654 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26151e9-5ea6-4cd4-810c-e2d22aef5d7e-operator-scripts\") pod \"glance-db-create-ddgt4\" (UID: \"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e\") " pod="openstack/glance-db-create-ddgt4" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.418723 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fntfs\" (UniqueName: \"kubernetes.io/projected/c26151e9-5ea6-4cd4-810c-e2d22aef5d7e-kube-api-access-fntfs\") pod \"glance-db-create-ddgt4\" (UID: \"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e\") " pod="openstack/glance-db-create-ddgt4" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.421320 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26151e9-5ea6-4cd4-810c-e2d22aef5d7e-operator-scripts\") pod \"glance-db-create-ddgt4\" (UID: \"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e\") " pod="openstack/glance-db-create-ddgt4" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.438748 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fntfs\" 
(UniqueName: \"kubernetes.io/projected/c26151e9-5ea6-4cd4-810c-e2d22aef5d7e-kube-api-access-fntfs\") pod \"glance-db-create-ddgt4\" (UID: \"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e\") " pod="openstack/glance-db-create-ddgt4" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.462557 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ddgt4" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.522689 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24002027-a259-4705-a0a0-9d2479988e23-operator-scripts\") pod \"glance-1193-account-create-2nz49\" (UID: \"24002027-a259-4705-a0a0-9d2479988e23\") " pod="openstack/glance-1193-account-create-2nz49" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.522761 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwqnf\" (UniqueName: \"kubernetes.io/projected/24002027-a259-4705-a0a0-9d2479988e23-kube-api-access-vwqnf\") pod \"glance-1193-account-create-2nz49\" (UID: \"24002027-a259-4705-a0a0-9d2479988e23\") " pod="openstack/glance-1193-account-create-2nz49" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.523781 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24002027-a259-4705-a0a0-9d2479988e23-operator-scripts\") pod \"glance-1193-account-create-2nz49\" (UID: \"24002027-a259-4705-a0a0-9d2479988e23\") " pod="openstack/glance-1193-account-create-2nz49" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.542679 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwqnf\" (UniqueName: \"kubernetes.io/projected/24002027-a259-4705-a0a0-9d2479988e23-kube-api-access-vwqnf\") pod \"glance-1193-account-create-2nz49\" (UID: \"24002027-a259-4705-a0a0-9d2479988e23\") " pod="openstack/glance-1193-account-create-2nz49" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.577295 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1193-account-create-2nz49" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.601585 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59899b64cc-ffbfd" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.727966 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw8pf\" (UniqueName: \"kubernetes.io/projected/66afc9c3-310f-426e-a54e-3ef9d8888a32-kube-api-access-mw8pf\") pod \"66afc9c3-310f-426e-a54e-3ef9d8888a32\" (UID: \"66afc9c3-310f-426e-a54e-3ef9d8888a32\") " Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.728264 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66afc9c3-310f-426e-a54e-3ef9d8888a32-config\") pod \"66afc9c3-310f-426e-a54e-3ef9d8888a32\" (UID: \"66afc9c3-310f-426e-a54e-3ef9d8888a32\") " Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.738926 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66afc9c3-310f-426e-a54e-3ef9d8888a32-kube-api-access-mw8pf" (OuterVolumeSpecName: "kube-api-access-mw8pf") pod "66afc9c3-310f-426e-a54e-3ef9d8888a32" (UID: "66afc9c3-310f-426e-a54e-3ef9d8888a32"). InnerVolumeSpecName "kube-api-access-mw8pf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:01:54 crc kubenswrapper[4482]: W1125 07:01:54.762044 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc26151e9_5ea6_4cd4_810c_e2d22aef5d7e.slice/crio-d2e5d1599251cd16e97528bfa457c4c6be4f39fbd897f473e78ad229c1d44375 WatchSource:0}: Error finding container d2e5d1599251cd16e97528bfa457c4c6be4f39fbd897f473e78ad229c1d44375: Status 404 returned error can't find the container with id d2e5d1599251cd16e97528bfa457c4c6be4f39fbd897f473e78ad229c1d44375 Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.762334 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66afc9c3-310f-426e-a54e-3ef9d8888a32-config" (OuterVolumeSpecName: "config") pod "66afc9c3-310f-426e-a54e-3ef9d8888a32" (UID: "66afc9c3-310f-426e-a54e-3ef9d8888a32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.785772 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ddgt4"] Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.864189 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw8pf\" (UniqueName: \"kubernetes.io/projected/66afc9c3-310f-426e-a54e-3ef9d8888a32-kube-api-access-mw8pf\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:54 crc kubenswrapper[4482]: I1125 07:01:54.864305 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66afc9c3-310f-426e-a54e-3ef9d8888a32-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.019907 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1193-account-create-2nz49"] Nov 25 07:01:55 crc kubenswrapper[4482]: W1125 07:01:55.049183 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24002027_a259_4705_a0a0_9d2479988e23.slice/crio-d5f109f58a5947550cbe85ba6a667e38c458c3880ab0e1787ac36a755956ae97 WatchSource:0}: Error finding container d5f109f58a5947550cbe85ba6a667e38c458c3880ab0e1787ac36a755956ae97: Status 404 returned error can't find the container with id d5f109f58a5947550cbe85ba6a667e38c458c3880ab0e1787ac36a755956ae97 Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.266395 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1193-account-create-2nz49" event={"ID":"24002027-a259-4705-a0a0-9d2479988e23","Type":"ContainerStarted","Data":"ff6f4281b862446f33ae43f605e4e3423fe0a1dd108c42fbaacf29274300ab62"} Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.266658 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1193-account-create-2nz49" event={"ID":"24002027-a259-4705-a0a0-9d2479988e23","Type":"ContainerStarted","Data":"d5f109f58a5947550cbe85ba6a667e38c458c3880ab0e1787ac36a755956ae97"} Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.268539 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pgdql" event={"ID":"c929347e-dfc5-409e-8d78-6e888f86a294","Type":"ContainerStarted","Data":"cea18c25152f8f61452fb73be31466dd85da9f0f0c759f4a6d0c5e212272fd3f"} Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.268707 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pgdql" Nov 25 07:01:55 
crc kubenswrapper[4482]: I1125 07:01:55.268789 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pgdql" event={"ID":"c929347e-dfc5-409e-8d78-6e888f86a294","Type":"ContainerStarted","Data":"9b754712a4ed10b309ee867ba8e3ef7c1d26841e67ab1374768beefd7645ac30"} Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.269368 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pgdql" Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.270374 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59899b64cc-ffbfd" Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.271281 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59899b64cc-ffbfd" event={"ID":"66afc9c3-310f-426e-a54e-3ef9d8888a32","Type":"ContainerDied","Data":"d985a89b7a09822eab1cbd5f6b3b8b159eb321766b4aee1d54be6fe1816f9cc9"} Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.271408 4482 scope.go:117] "RemoveContainer" containerID="ca1314a62d9c58eb4b3abb45224ec46fcfdacf54dd250d1c7b1f83db38ec58a0" Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.275493 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ddgt4" event={"ID":"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e","Type":"ContainerStarted","Data":"68da05c4f90f4c8b0cd68d362275bcdb4253cd80771e13bccc95ea5c0318ab1f"} Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.275523 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ddgt4" event={"ID":"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e","Type":"ContainerStarted","Data":"d2e5d1599251cd16e97528bfa457c4c6be4f39fbd897f473e78ad229c1d44375"} Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.286693 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-1193-account-create-2nz49" podStartSLOduration=1.2866837850000001 podStartE2EDuration="1.286683785s" podCreationTimestamp="2025-11-25 07:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:01:55.279064882 +0000 UTC m=+889.767296141" watchObservedRunningTime="2025-11-25 07:01:55.286683785 +0000 UTC m=+889.774915034" Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.319290 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-pgdql" podStartSLOduration=4.8412575879999995 podStartE2EDuration="1m6.319267048s" podCreationTimestamp="2025-11-25 07:00:49 +0000 UTC" firstStartedPulling="2025-11-25 07:00:51.570002606 +0000 UTC m=+826.058233865" lastFinishedPulling="2025-11-25 07:01:53.048012067 +0000 UTC m=+887.536243325" observedRunningTime="2025-11-25 07:01:55.31310878 +0000 UTC m=+889.801340039" watchObservedRunningTime="2025-11-25 07:01:55.319267048 +0000 UTC m=+889.807498307" Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.330928 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-ddgt4" podStartSLOduration=1.330909847 podStartE2EDuration="1.330909847s" podCreationTimestamp="2025-11-25 07:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:01:55.329498105 +0000 UTC m=+889.817729364" watchObservedRunningTime="2025-11-25 07:01:55.330909847 +0000 UTC m=+889.819141106" Nov 25 07:01:55 crc 
kubenswrapper[4482]: I1125 07:01:55.367989 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59899b64cc-ffbfd"] Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.378225 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59899b64cc-ffbfd"] Nov 25 07:01:55 crc kubenswrapper[4482]: E1125 07:01:55.587209 4482 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0f200db_f6f1_403b_bad6_85a803b5237c.slice/crio-conmon-5bb777607e066d395aae0c154642d129445b86b639d03147b2ce17c71317f3f9.scope\": RecentStats: unable to find data in memory cache]" Nov 25 07:01:55 crc kubenswrapper[4482]: I1125 07:01:55.843495 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66afc9c3-310f-426e-a54e-3ef9d8888a32" path="/var/lib/kubelet/pods/66afc9c3-310f-426e-a54e-3ef9d8888a32/volumes" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.054873 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f696d8f45-ldd8l"] Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.104490 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9999f46dc-zwcqh"] Nov 25 07:01:56 crc kubenswrapper[4482]: E1125 07:01:56.104852 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66afc9c3-310f-426e-a54e-3ef9d8888a32" containerName="init" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.104872 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="66afc9c3-310f-426e-a54e-3ef9d8888a32" containerName="init" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.105014 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="66afc9c3-310f-426e-a54e-3ef9d8888a32" containerName="init" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.106464 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.128643 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9999f46dc-zwcqh"] Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.193811 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbzhb\" (UniqueName: \"kubernetes.io/projected/27015668-67ef-4c76-9a5d-d32a88a24c03-kube-api-access-bbzhb\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.193853 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-config\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.193872 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-ovsdbserver-nb\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.193905 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-dns-svc\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.193962 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-ovsdbserver-sb\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.284765 4482 generic.go:334] "Generic (PLEG): container finished" podID="c26151e9-5ea6-4cd4-810c-e2d22aef5d7e" containerID="68da05c4f90f4c8b0cd68d362275bcdb4253cd80771e13bccc95ea5c0318ab1f" exitCode=0 Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.284830 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ddgt4" event={"ID":"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e","Type":"ContainerDied","Data":"68da05c4f90f4c8b0cd68d362275bcdb4253cd80771e13bccc95ea5c0318ab1f"} Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.289629 4482 generic.go:334] "Generic (PLEG): container finished" podID="24002027-a259-4705-a0a0-9d2479988e23" containerID="ff6f4281b862446f33ae43f605e4e3423fe0a1dd108c42fbaacf29274300ab62" exitCode=0 Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.289672 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1193-account-create-2nz49" event={"ID":"24002027-a259-4705-a0a0-9d2479988e23","Type":"ContainerDied","Data":"ff6f4281b862446f33ae43f605e4e3423fe0a1dd108c42fbaacf29274300ab62"} Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.291627 4482 generic.go:334] "Generic (PLEG): container finished" podID="80610219-52d0-4832-9586-5f565148e662" 
containerID="0396b2915b1de9596b94bd5ccabe4b7d37ef65c00b8c74d279472bd9e3cd96bd" exitCode=0 Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.291673 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80610219-52d0-4832-9586-5f565148e662","Type":"ContainerDied","Data":"0396b2915b1de9596b94bd5ccabe4b7d37ef65c00b8c74d279472bd9e3cd96bd"} Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.295001 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbzhb\" (UniqueName: \"kubernetes.io/projected/27015668-67ef-4c76-9a5d-d32a88a24c03-kube-api-access-bbzhb\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.295062 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-config\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.295086 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-ovsdbserver-nb\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.295145 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-dns-svc\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.295281 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-ovsdbserver-sb\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.296083 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-ovsdbserver-sb\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.296117 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-ovsdbserver-nb\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.296157 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-dns-svc\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.297585 4482 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-config\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.301132 4482 generic.go:334] "Generic (PLEG): container finished" podID="e0f200db-f6f1-403b-bad6-85a803b5237c" containerID="5bb777607e066d395aae0c154642d129445b86b639d03147b2ce17c71317f3f9" exitCode=0 Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.301283 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0f200db-f6f1-403b-bad6-85a803b5237c","Type":"ContainerDied","Data":"5bb777607e066d395aae0c154642d129445b86b639d03147b2ce17c71317f3f9"} Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.361840 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbzhb\" (UniqueName: \"kubernetes.io/projected/27015668-67ef-4c76-9a5d-d32a88a24c03-kube-api-access-bbzhb\") pod \"dnsmasq-dns-9999f46dc-zwcqh\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.423366 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:56 crc kubenswrapper[4482]: I1125 07:01:56.910939 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9999f46dc-zwcqh"] Nov 25 07:01:56 crc kubenswrapper[4482]: W1125 07:01:56.914797 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27015668_67ef_4c76_9a5d_d32a88a24c03.slice/crio-073d075e0b30bc0763c24d244b06ca65be71a5964375839b15e016d2905cb786 WatchSource:0}: Error finding container 073d075e0b30bc0763c24d244b06ca65be71a5964375839b15e016d2905cb786: Status 404 returned error can't find the container with id 073d075e0b30bc0763c24d244b06ca65be71a5964375839b15e016d2905cb786 Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.251996 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.257283 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.263849 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.263955 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-j4g52" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.264006 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.264881 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.311801 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80610219-52d0-4832-9586-5f565148e662","Type":"ContainerStarted","Data":"1a5c32b21846c99328ba3f94f60f130e3582b43f3d67d85cd291ea8e87e7780a"} Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.312082 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.313454 4482 generic.go:334] "Generic (PLEG): container finished" podID="685b0725-2c7f-4039-9471-9b596206232d" containerID="2442292fc7de999fd6b3ef2bb036e76cb6f0cd24b8e1ef116093c4c6cbbd0e44" exitCode=0 Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.313532 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848c894d9c-f46fl" event={"ID":"685b0725-2c7f-4039-9471-9b596206232d","Type":"ContainerDied","Data":"2442292fc7de999fd6b3ef2bb036e76cb6f0cd24b8e1ef116093c4c6cbbd0e44"} Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.314789 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" event={"ID":"27015668-67ef-4c76-9a5d-d32a88a24c03","Type":"ContainerStarted","Data":"073d075e0b30bc0763c24d244b06ca65be71a5964375839b15e016d2905cb786"} Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.345291 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0f200db-f6f1-403b-bad6-85a803b5237c","Type":"ContainerStarted","Data":"b9ee88f6fb40d3c2e01380c5823836e008c41b240f29ea00547428c9f402b949"} Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.345844 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.432225 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/21d6404f-f801-4230-af65-d110706155c6-cache\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.432314 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tjdz\" (UniqueName: \"kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-kube-api-access-5tjdz\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.432366 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.432517 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.432593 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/21d6404f-f801-4230-af65-d110706155c6-lock\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.473350 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.191364715 podStartE2EDuration="1m18.473339064s" podCreationTimestamp="2025-11-25 07:00:39 +0000 UTC" firstStartedPulling="2025-11-25 07:00:41.155725505 +0000 UTC m=+815.643956764" lastFinishedPulling="2025-11-25 07:01:21.437699854 +0000 UTC m=+855.925931113" observedRunningTime="2025-11-25 07:01:57.468139803 +0000 UTC m=+891.956371062" watchObservedRunningTime="2025-11-25 07:01:57.473339064 +0000 UTC m=+891.961570313" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.487486 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.526616 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.899564124 podStartE2EDuration="1m19.526596011s" podCreationTimestamp="2025-11-25 07:00:38 +0000 UTC" firstStartedPulling="2025-11-25 07:00:40.823892788 +0000 UTC m=+815.312124047" lastFinishedPulling="2025-11-25 07:01:21.450924674 +0000 UTC m=+855.939155934" observedRunningTime="2025-11-25 07:01:57.525959001 +0000 UTC m=+892.014190250" watchObservedRunningTime="2025-11-25 07:01:57.526596011 +0000 UTC m=+892.014827260" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.536125 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/21d6404f-f801-4230-af65-d110706155c6-cache\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.536230 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tjdz\" (UniqueName: \"kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-kube-api-access-5tjdz\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.536257 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.536327 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.536372 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/21d6404f-f801-4230-af65-d110706155c6-lock\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.536815 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/21d6404f-f801-4230-af65-d110706155c6-lock\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.537018 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/21d6404f-f801-4230-af65-d110706155c6-cache\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.537446 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: E1125 07:01:57.537962 4482 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 07:01:57 crc kubenswrapper[4482]: E1125 07:01:57.538003 4482 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 07:01:57 crc kubenswrapper[4482]: E1125 07:01:57.538044 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift podName:21d6404f-f801-4230-af65-d110706155c6 nodeName:}" failed. No retries permitted until 2025-11-25 07:01:58.038030277 +0000 UTC m=+892.526261525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift") pod "swift-storage-0" (UID: "21d6404f-f801-4230-af65-d110706155c6") : configmap "swift-ring-files" not found Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.555779 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tjdz\" (UniqueName: \"kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-kube-api-access-5tjdz\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.564339 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.736779 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848c894d9c-f46fl" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.841780 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmh4g\" (UniqueName: \"kubernetes.io/projected/685b0725-2c7f-4039-9471-9b596206232d-kube-api-access-zmh4g\") pod \"685b0725-2c7f-4039-9471-9b596206232d\" (UID: \"685b0725-2c7f-4039-9471-9b596206232d\") " Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.841904 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/685b0725-2c7f-4039-9471-9b596206232d-config\") pod \"685b0725-2c7f-4039-9471-9b596206232d\" (UID: \"685b0725-2c7f-4039-9471-9b596206232d\") " Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.842028 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/685b0725-2c7f-4039-9471-9b596206232d-dns-svc\") pod \"685b0725-2c7f-4039-9471-9b596206232d\" (UID: \"685b0725-2c7f-4039-9471-9b596206232d\") " Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.846784 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/685b0725-2c7f-4039-9471-9b596206232d-kube-api-access-zmh4g" (OuterVolumeSpecName: "kube-api-access-zmh4g") pod "685b0725-2c7f-4039-9471-9b596206232d" (UID: "685b0725-2c7f-4039-9471-9b596206232d"). InnerVolumeSpecName "kube-api-access-zmh4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.858919 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/685b0725-2c7f-4039-9471-9b596206232d-config" (OuterVolumeSpecName: "config") pod "685b0725-2c7f-4039-9471-9b596206232d" (UID: "685b0725-2c7f-4039-9471-9b596206232d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.880573 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/685b0725-2c7f-4039-9471-9b596206232d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "685b0725-2c7f-4039-9471-9b596206232d" (UID: "685b0725-2c7f-4039-9471-9b596206232d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.889992 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1193-account-create-2nz49" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.893821 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-ddgt4" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.945719 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24002027-a259-4705-a0a0-9d2479988e23-operator-scripts\") pod \"24002027-a259-4705-a0a0-9d2479988e23\" (UID: \"24002027-a259-4705-a0a0-9d2479988e23\") " Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.945945 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26151e9-5ea6-4cd4-810c-e2d22aef5d7e-operator-scripts\") pod \"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e\" (UID: \"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e\") " Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.945994 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fntfs\" (UniqueName: \"kubernetes.io/projected/c26151e9-5ea6-4cd4-810c-e2d22aef5d7e-kube-api-access-fntfs\") pod \"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e\" (UID: \"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e\") " Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.946037 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwqnf\" (UniqueName: \"kubernetes.io/projected/24002027-a259-4705-a0a0-9d2479988e23-kube-api-access-vwqnf\") pod \"24002027-a259-4705-a0a0-9d2479988e23\" (UID: \"24002027-a259-4705-a0a0-9d2479988e23\") " Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.946528 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/685b0725-2c7f-4039-9471-9b596206232d-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.946539 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/685b0725-2c7f-4039-9471-9b596206232d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.946549 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmh4g\" (UniqueName: \"kubernetes.io/projected/685b0725-2c7f-4039-9471-9b596206232d-kube-api-access-zmh4g\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.947670 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26151e9-5ea6-4cd4-810c-e2d22aef5d7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c26151e9-5ea6-4cd4-810c-e2d22aef5d7e" (UID: "c26151e9-5ea6-4cd4-810c-e2d22aef5d7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.948350 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24002027-a259-4705-a0a0-9d2479988e23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24002027-a259-4705-a0a0-9d2479988e23" (UID: "24002027-a259-4705-a0a0-9d2479988e23"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.950483 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26151e9-5ea6-4cd4-810c-e2d22aef5d7e-kube-api-access-fntfs" (OuterVolumeSpecName: "kube-api-access-fntfs") pod "c26151e9-5ea6-4cd4-810c-e2d22aef5d7e" (UID: "c26151e9-5ea6-4cd4-810c-e2d22aef5d7e"). InnerVolumeSpecName "kube-api-access-fntfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:01:57 crc kubenswrapper[4482]: I1125 07:01:57.950557 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24002027-a259-4705-a0a0-9d2479988e23-kube-api-access-vwqnf" (OuterVolumeSpecName: "kube-api-access-vwqnf") pod "24002027-a259-4705-a0a0-9d2479988e23" (UID: "24002027-a259-4705-a0a0-9d2479988e23"). InnerVolumeSpecName "kube-api-access-vwqnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.047824 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.047936 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c26151e9-5ea6-4cd4-810c-e2d22aef5d7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.047949 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fntfs\" (UniqueName: \"kubernetes.io/projected/c26151e9-5ea6-4cd4-810c-e2d22aef5d7e-kube-api-access-fntfs\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.047957 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwqnf\" (UniqueName: \"kubernetes.io/projected/24002027-a259-4705-a0a0-9d2479988e23-kube-api-access-vwqnf\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.047966 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24002027-a259-4705-a0a0-9d2479988e23-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:01:58 crc kubenswrapper[4482]: E1125 07:01:58.048078 4482 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 07:01:58 crc kubenswrapper[4482]: E1125 07:01:58.048090 4482 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 07:01:58 crc kubenswrapper[4482]: E1125 07:01:58.048158 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift podName:21d6404f-f801-4230-af65-d110706155c6 nodeName:}" failed. No retries permitted until 2025-11-25 07:01:59.048143884 +0000 UTC m=+893.536375143 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift") pod "swift-storage-0" (UID: "21d6404f-f801-4230-af65-d110706155c6") : configmap "swift-ring-files" not found Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.355525 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ddgt4" event={"ID":"c26151e9-5ea6-4cd4-810c-e2d22aef5d7e","Type":"ContainerDied","Data":"d2e5d1599251cd16e97528bfa457c4c6be4f39fbd897f473e78ad229c1d44375"} Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.355850 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2e5d1599251cd16e97528bfa457c4c6be4f39fbd897f473e78ad229c1d44375" Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.355903 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ddgt4" Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.365339 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1193-account-create-2nz49" event={"ID":"24002027-a259-4705-a0a0-9d2479988e23","Type":"ContainerDied","Data":"d5f109f58a5947550cbe85ba6a667e38c458c3880ab0e1787ac36a755956ae97"} Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.365357 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5f109f58a5947550cbe85ba6a667e38c458c3880ab0e1787ac36a755956ae97" Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.365393 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1193-account-create-2nz49" Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.369974 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848c894d9c-f46fl" event={"ID":"685b0725-2c7f-4039-9471-9b596206232d","Type":"ContainerDied","Data":"9a37278929948ab3f546a2a3c6fb1aac1eec95ff67edd5a77dfbb30c49713bcf"} Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.370010 4482 scope.go:117] "RemoveContainer" containerID="2442292fc7de999fd6b3ef2bb036e76cb6f0cd24b8e1ef116093c4c6cbbd0e44" Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.370088 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848c894d9c-f46fl" Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.385345 4482 generic.go:334] "Generic (PLEG): container finished" podID="27015668-67ef-4c76-9a5d-d32a88a24c03" containerID="f820cf0f020faef74b1f20c4370b76aa44f61dc719772be371391bc952abeae4" exitCode=0 Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.386015 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" event={"ID":"27015668-67ef-4c76-9a5d-d32a88a24c03","Type":"ContainerDied","Data":"f820cf0f020faef74b1f20c4370b76aa44f61dc719772be371391bc952abeae4"} Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.607944 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848c894d9c-f46fl"] Nov 25 07:01:58 crc kubenswrapper[4482]: I1125 07:01:58.616238 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848c894d9c-f46fl"] Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.087732 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:01:59 crc kubenswrapper[4482]: E1125 07:01:59.087940 4482 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 07:01:59 crc kubenswrapper[4482]: E1125 07:01:59.087955 4482 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 07:01:59 crc kubenswrapper[4482]: E1125 07:01:59.087998 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift podName:21d6404f-f801-4230-af65-d110706155c6 nodeName:}" failed. No retries permitted until 2025-11-25 07:02:01.08798261 +0000 UTC m=+895.576213869 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift") pod "swift-storage-0" (UID: "21d6404f-f801-4230-af65-d110706155c6") : configmap "swift-ring-files" not found Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.396959 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" event={"ID":"27015668-67ef-4c76-9a5d-d32a88a24c03","Type":"ContainerStarted","Data":"d4b52585a05b742925cb717ed472952fd28ef09adaf986fedf1eb9ef552ca217"} Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.397121 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.421085 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" podStartSLOduration=3.421066571 podStartE2EDuration="3.421066571s" podCreationTimestamp="2025-11-25 07:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:01:59.416636439 +0000 UTC m=+893.904867698" watchObservedRunningTime="2025-11-25 07:01:59.421066571 +0000 UTC m=+893.909297830" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.447114 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-z8dgz"] Nov 25 07:01:59 crc kubenswrapper[4482]: E1125 07:01:59.447532 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24002027-a259-4705-a0a0-9d2479988e23" containerName="mariadb-account-create" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.447549 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="24002027-a259-4705-a0a0-9d2479988e23" containerName="mariadb-account-create" Nov 25 07:01:59 crc kubenswrapper[4482]: E1125 07:01:59.447573 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26151e9-5ea6-4cd4-810c-e2d22aef5d7e" containerName="mariadb-database-create" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.447579 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26151e9-5ea6-4cd4-810c-e2d22aef5d7e" containerName="mariadb-database-create" Nov 25 07:01:59 crc kubenswrapper[4482]: E1125 07:01:59.447592 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685b0725-2c7f-4039-9471-9b596206232d" containerName="init" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.447598 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="685b0725-2c7f-4039-9471-9b596206232d" containerName="init" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.447759 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="685b0725-2c7f-4039-9471-9b596206232d" containerName="init" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.447777 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="24002027-a259-4705-a0a0-9d2479988e23" containerName="mariadb-account-create" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.447789 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="c26151e9-5ea6-4cd4-810c-e2d22aef5d7e" containerName="mariadb-database-create" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.448382 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.451407 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nc9ld" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.456391 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.463249 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-z8dgz"] Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.596936 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gswq\" (UniqueName: \"kubernetes.io/projected/6d25c491-a613-4f52-8cb8-95d689bc3000-kube-api-access-8gswq\") pod \"glance-db-sync-z8dgz\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.597028 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-db-sync-config-data\") pod \"glance-db-sync-z8dgz\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.597248 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-config-data\") pod \"glance-db-sync-z8dgz\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.597456 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-combined-ca-bundle\") pod \"glance-db-sync-z8dgz\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.699986 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-config-data\") pod \"glance-db-sync-z8dgz\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.700121 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-combined-ca-bundle\") pod \"glance-db-sync-z8dgz\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.700484 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gswq\" (UniqueName: \"kubernetes.io/projected/6d25c491-a613-4f52-8cb8-95d689bc3000-kube-api-access-8gswq\") pod \"glance-db-sync-z8dgz\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.700540 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-db-sync-config-data\") pod 
\"glance-db-sync-z8dgz\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.706911 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-config-data\") pod \"glance-db-sync-z8dgz\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.708101 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-combined-ca-bundle\") pod \"glance-db-sync-z8dgz\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.708730 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-db-sync-config-data\") pod \"glance-db-sync-z8dgz\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.724088 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gswq\" (UniqueName: \"kubernetes.io/projected/6d25c491-a613-4f52-8cb8-95d689bc3000-kube-api-access-8gswq\") pod \"glance-db-sync-z8dgz\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.765730 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z8dgz" Nov 25 07:01:59 crc kubenswrapper[4482]: I1125 07:01:59.843776 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="685b0725-2c7f-4039-9471-9b596206232d" path="/var/lib/kubelet/pods/685b0725-2c7f-4039-9471-9b596206232d/volumes" Nov 25 07:02:00 crc kubenswrapper[4482]: I1125 07:02:00.222846 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-c4pcb" podUID="cb9d3e0a-aeb5-4221-a617-71a724c676ed" containerName="ovn-controller" probeResult="failure" output=< Nov 25 07:02:00 crc kubenswrapper[4482]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 07:02:00 crc kubenswrapper[4482]: > Nov 25 07:02:00 crc kubenswrapper[4482]: I1125 07:02:00.285677 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-z8dgz"] Nov 25 07:02:00 crc kubenswrapper[4482]: I1125 07:02:00.407983 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z8dgz" event={"ID":"6d25c491-a613-4f52-8cb8-95d689bc3000","Type":"ContainerStarted","Data":"7e6f10008faf27410904e345dd699b876edc1d0b012aaf3f4007a8cfd625b509"} Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.128333 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:02:01 crc kubenswrapper[4482]: E1125 07:02:01.128574 4482 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 07:02:01 crc kubenswrapper[4482]: E1125 07:02:01.128611 4482 projected.go:194] Error preparing data 
for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 07:02:01 crc kubenswrapper[4482]: E1125 07:02:01.128676 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift podName:21d6404f-f801-4230-af65-d110706155c6 nodeName:}" failed. No retries permitted until 2025-11-25 07:02:05.128659148 +0000 UTC m=+899.616890408 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift") pod "swift-storage-0" (UID: "21d6404f-f801-4230-af65-d110706155c6") : configmap "swift-ring-files" not found Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.136774 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-9kkwr"] Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.137732 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.139819 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.140291 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.140423 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.160951 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9kkwr"] Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.229752 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-combined-ca-bundle\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.229838 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-scripts\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.229861 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh9sm\" (UniqueName: \"kubernetes.io/projected/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-kube-api-access-wh9sm\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.229895 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-swiftconf\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.230005 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-etc-swift\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.230062 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-dispersionconf\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.230289 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-ring-data-devices\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.331832 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-swiftconf\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.331871 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-etc-swift\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.331901 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-dispersionconf\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.331944 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-ring-data-devices\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.332042 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-combined-ca-bundle\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.332126 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-scripts\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.332154 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh9sm\" (UniqueName: 
\"kubernetes.io/projected/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-kube-api-access-wh9sm\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.332845 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-ring-data-devices\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.333534 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-scripts\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.333777 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-etc-swift\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.336459 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-combined-ca-bundle\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.338399 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-dispersionconf\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.338569 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-swiftconf\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.358128 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh9sm\" (UniqueName: \"kubernetes.io/projected/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-kube-api-access-wh9sm\") pod \"swift-ring-rebalance-9kkwr\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") " pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.453143 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9kkwr" Nov 25 07:02:01 crc kubenswrapper[4482]: I1125 07:02:01.868325 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9kkwr"] Nov 25 07:02:01 crc kubenswrapper[4482]: W1125 07:02:01.877326 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b01a0a7_35fb_425e_a5d0_4ef1c95d87c7.slice/crio-c71d7e10569b7957c2f2e396d61406ad9073d0a8a2ee8ce807cfa7a5845c89e2 WatchSource:0}: Error finding container c71d7e10569b7957c2f2e396d61406ad9073d0a8a2ee8ce807cfa7a5845c89e2: Status 404 returned error can't find the container with id c71d7e10569b7957c2f2e396d61406ad9073d0a8a2ee8ce807cfa7a5845c89e2 Nov 25 07:02:02 crc kubenswrapper[4482]: I1125 07:02:02.430630 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c2db5853-8834-4085-9d9a-1aeacaf47d4e","Type":"ContainerStarted","Data":"1e1b7bb42e054bee7c0d28fc3afef687d5c0b898d7a8402579a53d8c74583004"} Nov 25 07:02:02 crc kubenswrapper[4482]: I1125 07:02:02.430912 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c2db5853-8834-4085-9d9a-1aeacaf47d4e","Type":"ContainerStarted","Data":"76b719baba8cefadbd7ff3dca4b1e6ea537e5dbd49f33950802edfaefa380aa6"} Nov 25 07:02:02 crc kubenswrapper[4482]: I1125 07:02:02.431996 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9kkwr" event={"ID":"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7","Type":"ContainerStarted","Data":"c71d7e10569b7957c2f2e396d61406ad9073d0a8a2ee8ce807cfa7a5845c89e2"} Nov 25 07:02:02 crc kubenswrapper[4482]: I1125 07:02:02.452818 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.692186822 podStartE2EDuration="1m14.452778012s" podCreationTimestamp="2025-11-25 07:00:48 +0000 UTC" firstStartedPulling="2025-11-25 07:00:51.501541895 +0000 UTC m=+825.989773154" lastFinishedPulling="2025-11-25 07:02:01.262133085 +0000 UTC m=+895.750364344" observedRunningTime="2025-11-25 07:02:02.450356467 +0000 UTC m=+896.938587726" watchObservedRunningTime="2025-11-25 07:02:02.452778012 +0000 UTC m=+896.941009271" Nov 25 07:02:05 crc kubenswrapper[4482]: I1125 07:02:05.085296 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 25 07:02:05 crc kubenswrapper[4482]: I1125 07:02:05.085604 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 25 07:02:05 crc kubenswrapper[4482]: I1125 07:02:05.127362 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 25 07:02:05 crc kubenswrapper[4482]: I1125 07:02:05.215691 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:02:05 crc kubenswrapper[4482]: E1125 07:02:05.217545 4482 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 07:02:05 crc kubenswrapper[4482]: E1125 07:02:05.217604 4482 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 
25 07:02:05 crc kubenswrapper[4482]: E1125 07:02:05.217693 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift podName:21d6404f-f801-4230-af65-d110706155c6 nodeName:}" failed. No retries permitted until 2025-11-25 07:02:13.217657137 +0000 UTC m=+907.705888396 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift") pod "swift-storage-0" (UID: "21d6404f-f801-4230-af65-d110706155c6") : configmap "swift-ring-files" not found Nov 25 07:02:05 crc kubenswrapper[4482]: I1125 07:02:05.243743 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-c4pcb" podUID="cb9d3e0a-aeb5-4221-a617-71a724c676ed" containerName="ovn-controller" probeResult="failure" output=< Nov 25 07:02:05 crc kubenswrapper[4482]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 07:02:05 crc kubenswrapper[4482]: > Nov 25 07:02:06 crc kubenswrapper[4482]: I1125 07:02:06.425125 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:02:06 crc kubenswrapper[4482]: I1125 07:02:06.472299 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c8776475-qd28b"] Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.518785 4482 generic.go:334] "Generic (PLEG): container finished" podID="8096a59a-651e-416c-99a1-95e4f8ed8f22" containerID="4336d384c312515657d2e2353c236f65510cd34eecc286ba30200b0f52f1decf" exitCode=0 Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.518898 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657d948df5-trc69" event={"ID":"8096a59a-651e-416c-99a1-95e4f8ed8f22","Type":"ContainerDied","Data":"4336d384c312515657d2e2353c236f65510cd34eecc286ba30200b0f52f1decf"} Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.525163 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9kkwr" event={"ID":"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7","Type":"ContainerStarted","Data":"c4fcbff8456d8cb88143cb0db543501597742200ee16408696b647d90fb2a55d"} Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.564599 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-9kkwr" podStartSLOduration=1.979840051 podStartE2EDuration="7.564583795s" podCreationTimestamp="2025-11-25 07:02:01 +0000 UTC" firstStartedPulling="2025-11-25 07:02:01.878948361 +0000 UTC m=+896.367179610" lastFinishedPulling="2025-11-25 07:02:07.463692096 +0000 UTC m=+901.951923354" observedRunningTime="2025-11-25 07:02:08.560494536 +0000 UTC m=+903.048725796" watchObservedRunningTime="2025-11-25 07:02:08.564583795 +0000 UTC m=+903.052815054" Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.832822 4482 util.go:48] "No ready sandbox for pod can be found. 
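
The mount keeps failing because the projected source references a ConfigMap that does not exist yet; swift-ring-rebalance-9kkwr, whose container just started above, is what eventually publishes swift-ring-files. Whether it has appeared can be checked with client-go; a sketch, assuming a kubeconfig path is supplied via $KUBECONFIG:

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cm, err := cs.CoreV1().ConfigMaps("openstack").Get(
            context.TODO(), "swift-ring-files", metav1.GetOptions{})
        if err != nil {
            // Matches the MountVolume.SetUp error while the rebalance job runs.
            fmt.Println("still missing:", err)
            return
        }
        fmt.Printf("found %s with %d keys\n", cm.Name, len(cm.Data))
    }

Once the Get succeeds, the next backoff retry of the etc-swift mount for swift-storage-0 should go through.
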
Need to start a new one" pod="openstack/dnsmasq-dns-657d948df5-trc69" Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.897446 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9rc5\" (UniqueName: \"kubernetes.io/projected/8096a59a-651e-416c-99a1-95e4f8ed8f22-kube-api-access-t9rc5\") pod \"8096a59a-651e-416c-99a1-95e4f8ed8f22\" (UID: \"8096a59a-651e-416c-99a1-95e4f8ed8f22\") " Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.897505 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8096a59a-651e-416c-99a1-95e4f8ed8f22-dns-svc\") pod \"8096a59a-651e-416c-99a1-95e4f8ed8f22\" (UID: \"8096a59a-651e-416c-99a1-95e4f8ed8f22\") " Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.897577 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8096a59a-651e-416c-99a1-95e4f8ed8f22-config\") pod \"8096a59a-651e-416c-99a1-95e4f8ed8f22\" (UID: \"8096a59a-651e-416c-99a1-95e4f8ed8f22\") " Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.905313 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8096a59a-651e-416c-99a1-95e4f8ed8f22-kube-api-access-t9rc5" (OuterVolumeSpecName: "kube-api-access-t9rc5") pod "8096a59a-651e-416c-99a1-95e4f8ed8f22" (UID: "8096a59a-651e-416c-99a1-95e4f8ed8f22"). InnerVolumeSpecName "kube-api-access-t9rc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.919570 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8096a59a-651e-416c-99a1-95e4f8ed8f22-config" (OuterVolumeSpecName: "config") pod "8096a59a-651e-416c-99a1-95e4f8ed8f22" (UID: "8096a59a-651e-416c-99a1-95e4f8ed8f22"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.920531 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8096a59a-651e-416c-99a1-95e4f8ed8f22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8096a59a-651e-416c-99a1-95e4f8ed8f22" (UID: "8096a59a-651e-416c-99a1-95e4f8ed8f22"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.999444 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9rc5\" (UniqueName: \"kubernetes.io/projected/8096a59a-651e-416c-99a1-95e4f8ed8f22-kube-api-access-t9rc5\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.999564 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8096a59a-651e-416c-99a1-95e4f8ed8f22-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:08 crc kubenswrapper[4482]: I1125 07:02:08.999620 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8096a59a-651e-416c-99a1-95e4f8ed8f22-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:09 crc kubenswrapper[4482]: I1125 07:02:09.117978 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:02:09 crc kubenswrapper[4482]: I1125 07:02:09.118213 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:02:09 crc kubenswrapper[4482]: I1125 07:02:09.534555 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657d948df5-trc69" event={"ID":"8096a59a-651e-416c-99a1-95e4f8ed8f22","Type":"ContainerDied","Data":"189bbe119da2125c747dd8ac1e19591b195f539c89ea2492cd69927292bce232"} Nov 25 07:02:09 crc kubenswrapper[4482]: I1125 07:02:09.534859 4482 scope.go:117] "RemoveContainer" containerID="4336d384c312515657d2e2353c236f65510cd34eecc286ba30200b0f52f1decf" Nov 25 07:02:09 crc kubenswrapper[4482]: I1125 07:02:09.535080 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-657d948df5-trc69" Nov 25 07:02:09 crc kubenswrapper[4482]: I1125 07:02:09.621084 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-657d948df5-trc69"] Nov 25 07:02:09 crc kubenswrapper[4482]: I1125 07:02:09.625622 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-657d948df5-trc69"] Nov 25 07:02:09 crc kubenswrapper[4482]: I1125 07:02:09.840160 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8096a59a-651e-416c-99a1-95e4f8ed8f22" path="/var/lib/kubelet/pods/8096a59a-651e-416c-99a1-95e4f8ed8f22/volumes" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.117435 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.271263 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 25 07:02:10 crc kubenswrapper[4482]: E1125 07:02:10.271906 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8096a59a-651e-416c-99a1-95e4f8ed8f22" containerName="init" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.271996 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="8096a59a-651e-416c-99a1-95e4f8ed8f22" containerName="init" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.272241 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="8096a59a-651e-416c-99a1-95e4f8ed8f22" containerName="init" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.273292 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.275545 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.276857 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.277033 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.277236 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-czjx2" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.280360 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-c4pcb" podUID="cb9d3e0a-aeb5-4221-a617-71a724c676ed" containerName="ovn-controller" probeResult="failure" output=< Nov 25 07:02:10 crc kubenswrapper[4482]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 07:02:10 crc kubenswrapper[4482]: > Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.308695 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.324746 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.324789 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.324823 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-scripts\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.324846 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.324867 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-config\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.324882 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2m77\" (UniqueName: \"kubernetes.io/projected/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-kube-api-access-l2m77\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.324927 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.350627 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.429470 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.429829 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.429918 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-scripts\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.429970 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-ovn-rundir\") pod 
\"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.429998 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-config\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.430034 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2m77\" (UniqueName: \"kubernetes.io/projected/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-kube-api-access-l2m77\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.430132 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.431217 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-scripts\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.433234 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-config\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.433904 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.438119 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.441415 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.452587 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.453655 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2m77\" (UniqueName: \"kubernetes.io/projected/d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b-kube-api-access-l2m77\") pod \"ovn-northd-0\" (UID: 
\"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b\") " pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.592640 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.656432 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.870542 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-v527q"] Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.877026 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-v527q" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.906595 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-v527q"] Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.919780 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d451-account-create-mjmt4"] Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.921081 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d451-account-create-mjmt4" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.922658 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.934810 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d451-account-create-mjmt4"] Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.987791 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-w6572"] Nov 25 07:02:10 crc kubenswrapper[4482]: I1125 07:02:10.988870 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w6572" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.006294 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w6572"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.015205 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-eb6b-account-create-nmg2j"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.017053 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eb6b-account-create-nmg2j" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.019281 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.031341 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eb6b-account-create-nmg2j"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.044087 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd55de78-9d5c-46fa-9289-2ab8dbe482ad-operator-scripts\") pod \"barbican-d451-account-create-mjmt4\" (UID: \"fd55de78-9d5c-46fa-9289-2ab8dbe482ad\") " pod="openstack/barbican-d451-account-create-mjmt4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.044148 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtkcw\" (UniqueName: \"kubernetes.io/projected/0de43686-0d8e-4474-befd-ca1bdefb961d-kube-api-access-dtkcw\") pod \"heat-db-create-v527q\" (UID: \"0de43686-0d8e-4474-befd-ca1bdefb961d\") " pod="openstack/heat-db-create-v527q" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.044229 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0de43686-0d8e-4474-befd-ca1bdefb961d-operator-scripts\") pod \"heat-db-create-v527q\" (UID: \"0de43686-0d8e-4474-befd-ca1bdefb961d\") " pod="openstack/heat-db-create-v527q" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.044264 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t2p8\" (UniqueName: \"kubernetes.io/projected/fd55de78-9d5c-46fa-9289-2ab8dbe482ad-kube-api-access-7t2p8\") pod \"barbican-d451-account-create-mjmt4\" (UID: \"fd55de78-9d5c-46fa-9289-2ab8dbe482ad\") " pod="openstack/barbican-d451-account-create-mjmt4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.075439 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-5sj86"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.076673 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5sj86" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.103641 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5sj86"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.117014 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-849d-account-create-s6d2f"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.118608 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-849d-account-create-s6d2f" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.121280 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.132413 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-849d-account-create-s6d2f"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.145732 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtkcw\" (UniqueName: \"kubernetes.io/projected/0de43686-0d8e-4474-befd-ca1bdefb961d-kube-api-access-dtkcw\") pod \"heat-db-create-v527q\" (UID: \"0de43686-0d8e-4474-befd-ca1bdefb961d\") " pod="openstack/heat-db-create-v527q" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.145776 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35c01d69-7aa7-49af-99f5-465fafbbc191-operator-scripts\") pod \"barbican-db-create-5sj86\" (UID: \"35c01d69-7aa7-49af-99f5-465fafbbc191\") " pod="openstack/barbican-db-create-5sj86" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.145817 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2tt6\" (UniqueName: \"kubernetes.io/projected/35c01d69-7aa7-49af-99f5-465fafbbc191-kube-api-access-l2tt6\") pod \"barbican-db-create-5sj86\" (UID: \"35c01d69-7aa7-49af-99f5-465fafbbc191\") " pod="openstack/barbican-db-create-5sj86" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.145837 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ms9s\" (UniqueName: \"kubernetes.io/projected/479dc11c-3d7f-46f3-a7a4-ea663237c8af-kube-api-access-4ms9s\") pod \"cinder-db-create-w6572\" (UID: \"479dc11c-3d7f-46f3-a7a4-ea663237c8af\") " pod="openstack/cinder-db-create-w6572" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.145872 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgss2\" (UniqueName: \"kubernetes.io/projected/ab09a06a-9cbb-420a-b456-1aa12e0bd0e2-kube-api-access-wgss2\") pod \"cinder-eb6b-account-create-nmg2j\" (UID: \"ab09a06a-9cbb-420a-b456-1aa12e0bd0e2\") " pod="openstack/cinder-eb6b-account-create-nmg2j" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.145903 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0de43686-0d8e-4474-befd-ca1bdefb961d-operator-scripts\") pod \"heat-db-create-v527q\" (UID: \"0de43686-0d8e-4474-befd-ca1bdefb961d\") " pod="openstack/heat-db-create-v527q" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.145927 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t2p8\" (UniqueName: \"kubernetes.io/projected/fd55de78-9d5c-46fa-9289-2ab8dbe482ad-kube-api-access-7t2p8\") pod \"barbican-d451-account-create-mjmt4\" (UID: \"fd55de78-9d5c-46fa-9289-2ab8dbe482ad\") " pod="openstack/barbican-d451-account-create-mjmt4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.145949 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab09a06a-9cbb-420a-b456-1aa12e0bd0e2-operator-scripts\") pod \"cinder-eb6b-account-create-nmg2j\" 
(UID: \"ab09a06a-9cbb-420a-b456-1aa12e0bd0e2\") " pod="openstack/cinder-eb6b-account-create-nmg2j" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.145977 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479dc11c-3d7f-46f3-a7a4-ea663237c8af-operator-scripts\") pod \"cinder-db-create-w6572\" (UID: \"479dc11c-3d7f-46f3-a7a4-ea663237c8af\") " pod="openstack/cinder-db-create-w6572" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.146000 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd55de78-9d5c-46fa-9289-2ab8dbe482ad-operator-scripts\") pod \"barbican-d451-account-create-mjmt4\" (UID: \"fd55de78-9d5c-46fa-9289-2ab8dbe482ad\") " pod="openstack/barbican-d451-account-create-mjmt4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.146930 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd55de78-9d5c-46fa-9289-2ab8dbe482ad-operator-scripts\") pod \"barbican-d451-account-create-mjmt4\" (UID: \"fd55de78-9d5c-46fa-9289-2ab8dbe482ad\") " pod="openstack/barbican-d451-account-create-mjmt4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.148028 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0de43686-0d8e-4474-befd-ca1bdefb961d-operator-scripts\") pod \"heat-db-create-v527q\" (UID: \"0de43686-0d8e-4474-befd-ca1bdefb961d\") " pod="openstack/heat-db-create-v527q" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.178885 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t2p8\" (UniqueName: \"kubernetes.io/projected/fd55de78-9d5c-46fa-9289-2ab8dbe482ad-kube-api-access-7t2p8\") pod \"barbican-d451-account-create-mjmt4\" (UID: \"fd55de78-9d5c-46fa-9289-2ab8dbe482ad\") " pod="openstack/barbican-d451-account-create-mjmt4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.197673 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtkcw\" (UniqueName: \"kubernetes.io/projected/0de43686-0d8e-4474-befd-ca1bdefb961d-kube-api-access-dtkcw\") pod \"heat-db-create-v527q\" (UID: \"0de43686-0d8e-4474-befd-ca1bdefb961d\") " pod="openstack/heat-db-create-v527q" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.221250 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-v527q" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.247502 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d451-account-create-mjmt4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.248587 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35c01d69-7aa7-49af-99f5-465fafbbc191-operator-scripts\") pod \"barbican-db-create-5sj86\" (UID: \"35c01d69-7aa7-49af-99f5-465fafbbc191\") " pod="openstack/barbican-db-create-5sj86" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.248742 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2tt6\" (UniqueName: \"kubernetes.io/projected/35c01d69-7aa7-49af-99f5-465fafbbc191-kube-api-access-l2tt6\") pod \"barbican-db-create-5sj86\" (UID: \"35c01d69-7aa7-49af-99f5-465fafbbc191\") " pod="openstack/barbican-db-create-5sj86" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.248790 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ms9s\" (UniqueName: \"kubernetes.io/projected/479dc11c-3d7f-46f3-a7a4-ea663237c8af-kube-api-access-4ms9s\") pod \"cinder-db-create-w6572\" (UID: \"479dc11c-3d7f-46f3-a7a4-ea663237c8af\") " pod="openstack/cinder-db-create-w6572" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.248854 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgss2\" (UniqueName: \"kubernetes.io/projected/ab09a06a-9cbb-420a-b456-1aa12e0bd0e2-kube-api-access-wgss2\") pod \"cinder-eb6b-account-create-nmg2j\" (UID: \"ab09a06a-9cbb-420a-b456-1aa12e0bd0e2\") " pod="openstack/cinder-eb6b-account-create-nmg2j" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.248918 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz4mj\" (UniqueName: \"kubernetes.io/projected/435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d-kube-api-access-lz4mj\") pod \"heat-849d-account-create-s6d2f\" (UID: \"435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d\") " pod="openstack/heat-849d-account-create-s6d2f" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.249012 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab09a06a-9cbb-420a-b456-1aa12e0bd0e2-operator-scripts\") pod \"cinder-eb6b-account-create-nmg2j\" (UID: \"ab09a06a-9cbb-420a-b456-1aa12e0bd0e2\") " pod="openstack/cinder-eb6b-account-create-nmg2j" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.249072 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479dc11c-3d7f-46f3-a7a4-ea663237c8af-operator-scripts\") pod \"cinder-db-create-w6572\" (UID: \"479dc11c-3d7f-46f3-a7a4-ea663237c8af\") " pod="openstack/cinder-db-create-w6572" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.249196 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d-operator-scripts\") pod \"heat-849d-account-create-s6d2f\" (UID: \"435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d\") " pod="openstack/heat-849d-account-create-s6d2f" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.250048 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35c01d69-7aa7-49af-99f5-465fafbbc191-operator-scripts\") pod 
\"barbican-db-create-5sj86\" (UID: \"35c01d69-7aa7-49af-99f5-465fafbbc191\") " pod="openstack/barbican-db-create-5sj86" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.250102 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479dc11c-3d7f-46f3-a7a4-ea663237c8af-operator-scripts\") pod \"cinder-db-create-w6572\" (UID: \"479dc11c-3d7f-46f3-a7a4-ea663237c8af\") " pod="openstack/cinder-db-create-w6572" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.250115 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab09a06a-9cbb-420a-b456-1aa12e0bd0e2-operator-scripts\") pod \"cinder-eb6b-account-create-nmg2j\" (UID: \"ab09a06a-9cbb-420a-b456-1aa12e0bd0e2\") " pod="openstack/cinder-eb6b-account-create-nmg2j" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.285860 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgss2\" (UniqueName: \"kubernetes.io/projected/ab09a06a-9cbb-420a-b456-1aa12e0bd0e2-kube-api-access-wgss2\") pod \"cinder-eb6b-account-create-nmg2j\" (UID: \"ab09a06a-9cbb-420a-b456-1aa12e0bd0e2\") " pod="openstack/cinder-eb6b-account-create-nmg2j" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.285940 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-nlsmj"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.287570 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nlsmj" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.293509 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.293712 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.293830 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.295339 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nl4pz" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.311752 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ms9s\" (UniqueName: \"kubernetes.io/projected/479dc11c-3d7f-46f3-a7a4-ea663237c8af-kube-api-access-4ms9s\") pod \"cinder-db-create-w6572\" (UID: \"479dc11c-3d7f-46f3-a7a4-ea663237c8af\") " pod="openstack/cinder-db-create-w6572" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.319189 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2tt6\" (UniqueName: \"kubernetes.io/projected/35c01d69-7aa7-49af-99f5-465fafbbc191-kube-api-access-l2tt6\") pod \"barbican-db-create-5sj86\" (UID: \"35c01d69-7aa7-49af-99f5-465fafbbc191\") " pod="openstack/barbican-db-create-5sj86" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.323019 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.336790 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w6572" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.343553 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eb6b-account-create-nmg2j" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.350906 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz4mj\" (UniqueName: \"kubernetes.io/projected/435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d-kube-api-access-lz4mj\") pod \"heat-849d-account-create-s6d2f\" (UID: \"435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d\") " pod="openstack/heat-849d-account-create-s6d2f" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.351004 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d-operator-scripts\") pod \"heat-849d-account-create-s6d2f\" (UID: \"435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d\") " pod="openstack/heat-849d-account-create-s6d2f" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.351777 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d-operator-scripts\") pod \"heat-849d-account-create-s6d2f\" (UID: \"435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d\") " pod="openstack/heat-849d-account-create-s6d2f" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.360823 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nlsmj"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.383874 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz4mj\" (UniqueName: \"kubernetes.io/projected/435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d-kube-api-access-lz4mj\") pod \"heat-849d-account-create-s6d2f\" (UID: \"435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d\") " pod="openstack/heat-849d-account-create-s6d2f" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.395251 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c8ca-account-create-s9xf9"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.396313 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c8ca-account-create-s9xf9" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.407184 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.410010 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5sj86" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.419383 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c8ca-account-create-s9xf9"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.449629 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-849d-account-create-s6d2f" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.452337 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d08539-2898-4d05-af16-1dd533f1720d-combined-ca-bundle\") pod \"keystone-db-sync-nlsmj\" (UID: \"a3d08539-2898-4d05-af16-1dd533f1720d\") " pod="openstack/keystone-db-sync-nlsmj" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.453269 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d08539-2898-4d05-af16-1dd533f1720d-config-data\") pod \"keystone-db-sync-nlsmj\" (UID: \"a3d08539-2898-4d05-af16-1dd533f1720d\") " pod="openstack/keystone-db-sync-nlsmj" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.453401 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bl2g\" (UniqueName: \"kubernetes.io/projected/a3d08539-2898-4d05-af16-1dd533f1720d-kube-api-access-8bl2g\") pod \"keystone-db-sync-nlsmj\" (UID: \"a3d08539-2898-4d05-af16-1dd533f1720d\") " pod="openstack/keystone-db-sync-nlsmj" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.457885 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-c5mm4"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.459001 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-c5mm4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.469519 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-c5mm4"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.557474 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb2d8\" (UniqueName: \"kubernetes.io/projected/cbcc64ec-1a64-403b-be72-d33bb30e5385-kube-api-access-gb2d8\") pod \"neutron-db-create-c5mm4\" (UID: \"cbcc64ec-1a64-403b-be72-d33bb30e5385\") " pod="openstack/neutron-db-create-c5mm4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.557540 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d08539-2898-4d05-af16-1dd533f1720d-combined-ca-bundle\") pod \"keystone-db-sync-nlsmj\" (UID: \"a3d08539-2898-4d05-af16-1dd533f1720d\") " pod="openstack/keystone-db-sync-nlsmj" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.557573 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbcc64ec-1a64-403b-be72-d33bb30e5385-operator-scripts\") pod \"neutron-db-create-c5mm4\" (UID: \"cbcc64ec-1a64-403b-be72-d33bb30e5385\") " pod="openstack/neutron-db-create-c5mm4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.557626 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4804a1ca-dd11-42f7-913d-4b3c1bdb7ead-operator-scripts\") pod \"neutron-c8ca-account-create-s9xf9\" (UID: \"4804a1ca-dd11-42f7-913d-4b3c1bdb7ead\") " pod="openstack/neutron-c8ca-account-create-s9xf9" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.557750 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a3d08539-2898-4d05-af16-1dd533f1720d-config-data\") pod \"keystone-db-sync-nlsmj\" (UID: \"a3d08539-2898-4d05-af16-1dd533f1720d\") " pod="openstack/keystone-db-sync-nlsmj" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.557817 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bl2g\" (UniqueName: \"kubernetes.io/projected/a3d08539-2898-4d05-af16-1dd533f1720d-kube-api-access-8bl2g\") pod \"keystone-db-sync-nlsmj\" (UID: \"a3d08539-2898-4d05-af16-1dd533f1720d\") " pod="openstack/keystone-db-sync-nlsmj" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.557855 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrtz8\" (UniqueName: \"kubernetes.io/projected/4804a1ca-dd11-42f7-913d-4b3c1bdb7ead-kube-api-access-zrtz8\") pod \"neutron-c8ca-account-create-s9xf9\" (UID: \"4804a1ca-dd11-42f7-913d-4b3c1bdb7ead\") " pod="openstack/neutron-c8ca-account-create-s9xf9" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.574745 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d08539-2898-4d05-af16-1dd533f1720d-config-data\") pod \"keystone-db-sync-nlsmj\" (UID: \"a3d08539-2898-4d05-af16-1dd533f1720d\") " pod="openstack/keystone-db-sync-nlsmj" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.585711 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d08539-2898-4d05-af16-1dd533f1720d-combined-ca-bundle\") pod \"keystone-db-sync-nlsmj\" (UID: \"a3d08539-2898-4d05-af16-1dd533f1720d\") " pod="openstack/keystone-db-sync-nlsmj" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.590639 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bl2g\" (UniqueName: \"kubernetes.io/projected/a3d08539-2898-4d05-af16-1dd533f1720d-kube-api-access-8bl2g\") pod \"keystone-db-sync-nlsmj\" (UID: \"a3d08539-2898-4d05-af16-1dd533f1720d\") " pod="openstack/keystone-db-sync-nlsmj" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.596581 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b","Type":"ContainerStarted","Data":"bd351c37eba9e93e210af9098529e7aedbf7882f4391b3e06ff95db182c0dc4d"} Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.635986 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-nlsmj" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.663149 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrtz8\" (UniqueName: \"kubernetes.io/projected/4804a1ca-dd11-42f7-913d-4b3c1bdb7ead-kube-api-access-zrtz8\") pod \"neutron-c8ca-account-create-s9xf9\" (UID: \"4804a1ca-dd11-42f7-913d-4b3c1bdb7ead\") " pod="openstack/neutron-c8ca-account-create-s9xf9" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.663271 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb2d8\" (UniqueName: \"kubernetes.io/projected/cbcc64ec-1a64-403b-be72-d33bb30e5385-kube-api-access-gb2d8\") pod \"neutron-db-create-c5mm4\" (UID: \"cbcc64ec-1a64-403b-be72-d33bb30e5385\") " pod="openstack/neutron-db-create-c5mm4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.663295 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbcc64ec-1a64-403b-be72-d33bb30e5385-operator-scripts\") pod \"neutron-db-create-c5mm4\" (UID: \"cbcc64ec-1a64-403b-be72-d33bb30e5385\") " pod="openstack/neutron-db-create-c5mm4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.663319 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4804a1ca-dd11-42f7-913d-4b3c1bdb7ead-operator-scripts\") pod \"neutron-c8ca-account-create-s9xf9\" (UID: \"4804a1ca-dd11-42f7-913d-4b3c1bdb7ead\") " pod="openstack/neutron-c8ca-account-create-s9xf9" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.663975 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4804a1ca-dd11-42f7-913d-4b3c1bdb7ead-operator-scripts\") pod \"neutron-c8ca-account-create-s9xf9\" (UID: \"4804a1ca-dd11-42f7-913d-4b3c1bdb7ead\") " pod="openstack/neutron-c8ca-account-create-s9xf9" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.664815 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbcc64ec-1a64-403b-be72-d33bb30e5385-operator-scripts\") pod \"neutron-db-create-c5mm4\" (UID: \"cbcc64ec-1a64-403b-be72-d33bb30e5385\") " pod="openstack/neutron-db-create-c5mm4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.698491 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb2d8\" (UniqueName: \"kubernetes.io/projected/cbcc64ec-1a64-403b-be72-d33bb30e5385-kube-api-access-gb2d8\") pod \"neutron-db-create-c5mm4\" (UID: \"cbcc64ec-1a64-403b-be72-d33bb30e5385\") " pod="openstack/neutron-db-create-c5mm4" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.700934 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrtz8\" (UniqueName: \"kubernetes.io/projected/4804a1ca-dd11-42f7-913d-4b3c1bdb7ead-kube-api-access-zrtz8\") pod \"neutron-c8ca-account-create-s9xf9\" (UID: \"4804a1ca-dd11-42f7-913d-4b3c1bdb7ead\") " pod="openstack/neutron-c8ca-account-create-s9xf9" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.757385 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-v527q"] Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.815466 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c8ca-account-create-s9xf9" Nov 25 07:02:11 crc kubenswrapper[4482]: I1125 07:02:11.858716 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-c5mm4" Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.023237 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d451-account-create-mjmt4"] Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.179655 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5sj86"] Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.213795 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w6572"] Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.259666 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nlsmj"] Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.360604 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-849d-account-create-s6d2f"] Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.476460 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eb6b-account-create-nmg2j"] Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.507344 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-c5mm4"] Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.536287 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c8ca-account-create-s9xf9"] Nov 25 07:02:12 crc kubenswrapper[4482]: W1125 07:02:12.536967 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbcc64ec_1a64_403b_be72_d33bb30e5385.slice/crio-9c902b9d14280c5eee91977589b8c0001f77973d816b95c9f8a08b56ba1be1cf WatchSource:0}: Error finding container 9c902b9d14280c5eee91977589b8c0001f77973d816b95c9f8a08b56ba1be1cf: Status 404 returned error can't find the container with id 9c902b9d14280c5eee91977589b8c0001f77973d816b95c9f8a08b56ba1be1cf Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.622966 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w6572" event={"ID":"479dc11c-3d7f-46f3-a7a4-ea663237c8af","Type":"ContainerStarted","Data":"3cd180bfc22ceab0d57321b09ee69451a401d15aa9b7238ff84d7f29f3af579c"} Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.626636 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-849d-account-create-s6d2f" event={"ID":"435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d","Type":"ContainerStarted","Data":"8d3020e9196c67a03edfae8ab71a088ef67552ccb7f11b0aa33be32dd2484fc0"} Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.638548 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c8ca-account-create-s9xf9" event={"ID":"4804a1ca-dd11-42f7-913d-4b3c1bdb7ead","Type":"ContainerStarted","Data":"6a2e991040fd7cef4c058f80662155cc23eeb574bad39068585bd1c28afc9507"} Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.642461 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5sj86" event={"ID":"35c01d69-7aa7-49af-99f5-465fafbbc191","Type":"ContainerStarted","Data":"da0ac6464aa7fcffb4f30bbf86d0ac45931802905c6b45d54b15461cf17ba803"} Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.648533 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eb6b-account-create-nmg2j" 
event={"ID":"ab09a06a-9cbb-420a-b456-1aa12e0bd0e2","Type":"ContainerStarted","Data":"f4fda8b70200236255b80bbc3b1f78a03c255ef4741fcc207cfc82550e97d43d"} Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.654382 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-v527q" event={"ID":"0de43686-0d8e-4474-befd-ca1bdefb961d","Type":"ContainerStarted","Data":"ea794e92f85452ae0d911c240dcdf5367b4501e67cb6783becb64cff608c5494"} Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.654428 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-v527q" event={"ID":"0de43686-0d8e-4474-befd-ca1bdefb961d","Type":"ContainerStarted","Data":"3d43189c9517a3e22e58e10b525a98c01948570d979982c7b5e8aca9cb2ad5ab"} Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.659424 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-c5mm4" event={"ID":"cbcc64ec-1a64-403b-be72-d33bb30e5385","Type":"ContainerStarted","Data":"9c902b9d14280c5eee91977589b8c0001f77973d816b95c9f8a08b56ba1be1cf"} Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.674727 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nlsmj" event={"ID":"a3d08539-2898-4d05-af16-1dd533f1720d","Type":"ContainerStarted","Data":"018a61b90dd74b8d4a66aee9bc1a45a9b2284c9c94fa8e180f66777f9260f6d7"} Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.678671 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-v527q" podStartSLOduration=2.678662044 podStartE2EDuration="2.678662044s" podCreationTimestamp="2025-11-25 07:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:02:12.67432611 +0000 UTC m=+907.162557369" watchObservedRunningTime="2025-11-25 07:02:12.678662044 +0000 UTC m=+907.166893303" Nov 25 07:02:12 crc kubenswrapper[4482]: I1125 07:02:12.679258 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d451-account-create-mjmt4" event={"ID":"fd55de78-9d5c-46fa-9289-2ab8dbe482ad","Type":"ContainerStarted","Data":"f546157e456fbad0b00aa849b7904721de4137601e1eb2de5adcf51c0c5a61e8"} Nov 25 07:02:13 crc kubenswrapper[4482]: I1125 07:02:13.221537 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0" Nov 25 07:02:13 crc kubenswrapper[4482]: E1125 07:02:13.222224 4482 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 07:02:13 crc kubenswrapper[4482]: E1125 07:02:13.222255 4482 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 07:02:13 crc kubenswrapper[4482]: E1125 07:02:13.222336 4482 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift podName:21d6404f-f801-4230-af65-d110706155c6 nodeName:}" failed. No retries permitted until 2025-11-25 07:02:29.222311631 +0000 UTC m=+923.710542890 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift") pod "swift-storage-0" (UID: "21d6404f-f801-4230-af65-d110706155c6") : configmap "swift-ring-files" not found Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.745999 4482 generic.go:334] "Generic (PLEG): container finished" podID="479dc11c-3d7f-46f3-a7a4-ea663237c8af" containerID="f0890ca3f20afb75dd4d01538548a5f142c752342b6c1d02b5ea3990b1b7ebc0" exitCode=0 Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.746384 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w6572" event={"ID":"479dc11c-3d7f-46f3-a7a4-ea663237c8af","Type":"ContainerDied","Data":"f0890ca3f20afb75dd4d01538548a5f142c752342b6c1d02b5ea3990b1b7ebc0"} Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.763612 4482 generic.go:334] "Generic (PLEG): container finished" podID="ab09a06a-9cbb-420a-b456-1aa12e0bd0e2" containerID="4b18aa953c1b8458a0a0f0a0fff79c0504846ace794a7a0610b1d5db9b8e8a48" exitCode=0 Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.763701 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eb6b-account-create-nmg2j" event={"ID":"ab09a06a-9cbb-420a-b456-1aa12e0bd0e2","Type":"ContainerDied","Data":"4b18aa953c1b8458a0a0f0a0fff79c0504846ace794a7a0610b1d5db9b8e8a48"} Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.768149 4482 generic.go:334] "Generic (PLEG): container finished" podID="0de43686-0d8e-4474-befd-ca1bdefb961d" containerID="ea794e92f85452ae0d911c240dcdf5367b4501e67cb6783becb64cff608c5494" exitCode=0 Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.768237 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-v527q" event={"ID":"0de43686-0d8e-4474-befd-ca1bdefb961d","Type":"ContainerDied","Data":"ea794e92f85452ae0d911c240dcdf5367b4501e67cb6783becb64cff608c5494"} Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.769621 4482 generic.go:334] "Generic (PLEG): container finished" podID="435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d" containerID="65b5914bea150d66430d140c56b8764b9179acef8664592b80979c195012ef15" exitCode=0 Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.769697 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-849d-account-create-s6d2f" event={"ID":"435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d","Type":"ContainerDied","Data":"65b5914bea150d66430d140c56b8764b9179acef8664592b80979c195012ef15"} Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.771231 4482 generic.go:334] "Generic (PLEG): container finished" podID="cbcc64ec-1a64-403b-be72-d33bb30e5385" containerID="95785a714c9130ac63897f82ac42470eefab3926496e55045a301e9fc5f71f2e" exitCode=0 Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.771304 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-c5mm4" event={"ID":"cbcc64ec-1a64-403b-be72-d33bb30e5385","Type":"ContainerDied","Data":"95785a714c9130ac63897f82ac42470eefab3926496e55045a301e9fc5f71f2e"} Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.805012 4482 generic.go:334] "Generic (PLEG): container finished" podID="4804a1ca-dd11-42f7-913d-4b3c1bdb7ead" containerID="9964604583b93a9fbf942889db321001f39fff096f50da8420490f6b27cb4c5d" exitCode=0 Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.805239 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c8ca-account-create-s9xf9" 
event={"ID":"4804a1ca-dd11-42f7-913d-4b3c1bdb7ead","Type":"ContainerDied","Data":"9964604583b93a9fbf942889db321001f39fff096f50da8420490f6b27cb4c5d"} Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.820104 4482 generic.go:334] "Generic (PLEG): container finished" podID="fd55de78-9d5c-46fa-9289-2ab8dbe482ad" containerID="3824631206b39c6605fd32c79e27c515c92bd15cded7a2d8667ebf3ddfbb6fb3" exitCode=0 Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.820227 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d451-account-create-mjmt4" event={"ID":"fd55de78-9d5c-46fa-9289-2ab8dbe482ad","Type":"ContainerDied","Data":"3824631206b39c6605fd32c79e27c515c92bd15cded7a2d8667ebf3ddfbb6fb3"} Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.827728 4482 generic.go:334] "Generic (PLEG): container finished" podID="35c01d69-7aa7-49af-99f5-465fafbbc191" containerID="e4e4656b2f5b2dfb9503d31ac45b87579f1a819c7c11e64ca1946598cd11703f" exitCode=0 Nov 25 07:02:14 crc kubenswrapper[4482]: I1125 07:02:14.827784 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5sj86" event={"ID":"35c01d69-7aa7-49af-99f5-465fafbbc191","Type":"ContainerDied","Data":"e4e4656b2f5b2dfb9503d31ac45b87579f1a819c7c11e64ca1946598cd11703f"} Nov 25 07:02:15 crc kubenswrapper[4482]: I1125 07:02:15.228035 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-c4pcb" podUID="cb9d3e0a-aeb5-4221-a617-71a724c676ed" containerName="ovn-controller" probeResult="failure" output=< Nov 25 07:02:15 crc kubenswrapper[4482]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 07:02:15 crc kubenswrapper[4482]: > Nov 25 07:02:15 crc kubenswrapper[4482]: I1125 07:02:15.848717 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b","Type":"ContainerStarted","Data":"d7f01518f298e4b3f78b362936929b389cdf5d6c60fc1b1f6a9b59b1a9631ba8"} Nov 25 07:02:15 crc kubenswrapper[4482]: I1125 07:02:15.848978 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d3c6ce77-0001-47c3-92f3-5bae0c6d9a8b","Type":"ContainerStarted","Data":"66dbb03963be2c8600d4da8e74f9cfaa6bc72414bda2e764b30826df3c5b575d"} Nov 25 07:02:15 crc kubenswrapper[4482]: I1125 07:02:15.849000 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 25 07:02:15 crc kubenswrapper[4482]: I1125 07:02:15.966758 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.898033225 podStartE2EDuration="5.966740434s" podCreationTimestamp="2025-11-25 07:02:10 +0000 UTC" firstStartedPulling="2025-11-25 07:02:11.360292749 +0000 UTC m=+905.848524007" lastFinishedPulling="2025-11-25 07:02:15.428999956 +0000 UTC m=+909.917231216" observedRunningTime="2025-11-25 07:02:15.961022375 +0000 UTC m=+910.449253634" watchObservedRunningTime="2025-11-25 07:02:15.966740434 +0000 UTC m=+910.454971684" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.219190 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-v527q" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.291805 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtkcw\" (UniqueName: \"kubernetes.io/projected/0de43686-0d8e-4474-befd-ca1bdefb961d-kube-api-access-dtkcw\") pod \"0de43686-0d8e-4474-befd-ca1bdefb961d\" (UID: \"0de43686-0d8e-4474-befd-ca1bdefb961d\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.291857 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0de43686-0d8e-4474-befd-ca1bdefb961d-operator-scripts\") pod \"0de43686-0d8e-4474-befd-ca1bdefb961d\" (UID: \"0de43686-0d8e-4474-befd-ca1bdefb961d\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.292955 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0de43686-0d8e-4474-befd-ca1bdefb961d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0de43686-0d8e-4474-befd-ca1bdefb961d" (UID: "0de43686-0d8e-4474-befd-ca1bdefb961d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.302009 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0de43686-0d8e-4474-befd-ca1bdefb961d-kube-api-access-dtkcw" (OuterVolumeSpecName: "kube-api-access-dtkcw") pod "0de43686-0d8e-4474-befd-ca1bdefb961d" (UID: "0de43686-0d8e-4474-befd-ca1bdefb961d"). InnerVolumeSpecName "kube-api-access-dtkcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.398252 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtkcw\" (UniqueName: \"kubernetes.io/projected/0de43686-0d8e-4474-befd-ca1bdefb961d-kube-api-access-dtkcw\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.398283 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0de43686-0d8e-4474-befd-ca1bdefb961d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.467798 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5sj86" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.581375 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d451-account-create-mjmt4" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.610769 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35c01d69-7aa7-49af-99f5-465fafbbc191-operator-scripts\") pod \"35c01d69-7aa7-49af-99f5-465fafbbc191\" (UID: \"35c01d69-7aa7-49af-99f5-465fafbbc191\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.610877 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2tt6\" (UniqueName: \"kubernetes.io/projected/35c01d69-7aa7-49af-99f5-465fafbbc191-kube-api-access-l2tt6\") pod \"35c01d69-7aa7-49af-99f5-465fafbbc191\" (UID: \"35c01d69-7aa7-49af-99f5-465fafbbc191\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.614889 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35c01d69-7aa7-49af-99f5-465fafbbc191-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35c01d69-7aa7-49af-99f5-465fafbbc191" (UID: "35c01d69-7aa7-49af-99f5-465fafbbc191"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.615835 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35c01d69-7aa7-49af-99f5-465fafbbc191-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.621377 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c01d69-7aa7-49af-99f5-465fafbbc191-kube-api-access-l2tt6" (OuterVolumeSpecName: "kube-api-access-l2tt6") pod "35c01d69-7aa7-49af-99f5-465fafbbc191" (UID: "35c01d69-7aa7-49af-99f5-465fafbbc191"). InnerVolumeSpecName "kube-api-access-l2tt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.650810 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eb6b-account-create-nmg2j" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.659375 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c8ca-account-create-s9xf9" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.659740 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w6572" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.674200 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-849d-account-create-s6d2f" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.686792 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-c5mm4" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.717025 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4804a1ca-dd11-42f7-913d-4b3c1bdb7ead-operator-scripts\") pod \"4804a1ca-dd11-42f7-913d-4b3c1bdb7ead\" (UID: \"4804a1ca-dd11-42f7-913d-4b3c1bdb7ead\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.717123 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrtz8\" (UniqueName: \"kubernetes.io/projected/4804a1ca-dd11-42f7-913d-4b3c1bdb7ead-kube-api-access-zrtz8\") pod \"4804a1ca-dd11-42f7-913d-4b3c1bdb7ead\" (UID: \"4804a1ca-dd11-42f7-913d-4b3c1bdb7ead\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.717221 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab09a06a-9cbb-420a-b456-1aa12e0bd0e2-operator-scripts\") pod \"ab09a06a-9cbb-420a-b456-1aa12e0bd0e2\" (UID: \"ab09a06a-9cbb-420a-b456-1aa12e0bd0e2\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.717299 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz4mj\" (UniqueName: \"kubernetes.io/projected/435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d-kube-api-access-lz4mj\") pod \"435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d\" (UID: \"435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.717349 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgss2\" (UniqueName: \"kubernetes.io/projected/ab09a06a-9cbb-420a-b456-1aa12e0bd0e2-kube-api-access-wgss2\") pod \"ab09a06a-9cbb-420a-b456-1aa12e0bd0e2\" (UID: \"ab09a06a-9cbb-420a-b456-1aa12e0bd0e2\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.717364 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d-operator-scripts\") pod \"435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d\" (UID: \"435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.717424 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd55de78-9d5c-46fa-9289-2ab8dbe482ad-operator-scripts\") pod \"fd55de78-9d5c-46fa-9289-2ab8dbe482ad\" (UID: \"fd55de78-9d5c-46fa-9289-2ab8dbe482ad\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.717445 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ms9s\" (UniqueName: \"kubernetes.io/projected/479dc11c-3d7f-46f3-a7a4-ea663237c8af-kube-api-access-4ms9s\") pod \"479dc11c-3d7f-46f3-a7a4-ea663237c8af\" (UID: \"479dc11c-3d7f-46f3-a7a4-ea663237c8af\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.717503 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479dc11c-3d7f-46f3-a7a4-ea663237c8af-operator-scripts\") pod \"479dc11c-3d7f-46f3-a7a4-ea663237c8af\" (UID: \"479dc11c-3d7f-46f3-a7a4-ea663237c8af\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.717527 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t2p8\" (UniqueName: 
\"kubernetes.io/projected/fd55de78-9d5c-46fa-9289-2ab8dbe482ad-kube-api-access-7t2p8\") pod \"fd55de78-9d5c-46fa-9289-2ab8dbe482ad\" (UID: \"fd55de78-9d5c-46fa-9289-2ab8dbe482ad\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.717911 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2tt6\" (UniqueName: \"kubernetes.io/projected/35c01d69-7aa7-49af-99f5-465fafbbc191-kube-api-access-l2tt6\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.719042 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd55de78-9d5c-46fa-9289-2ab8dbe482ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd55de78-9d5c-46fa-9289-2ab8dbe482ad" (UID: "fd55de78-9d5c-46fa-9289-2ab8dbe482ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.719716 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d" (UID: "435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.723838 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd55de78-9d5c-46fa-9289-2ab8dbe482ad-kube-api-access-7t2p8" (OuterVolumeSpecName: "kube-api-access-7t2p8") pod "fd55de78-9d5c-46fa-9289-2ab8dbe482ad" (UID: "fd55de78-9d5c-46fa-9289-2ab8dbe482ad"). InnerVolumeSpecName "kube-api-access-7t2p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.724155 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/479dc11c-3d7f-46f3-a7a4-ea663237c8af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "479dc11c-3d7f-46f3-a7a4-ea663237c8af" (UID: "479dc11c-3d7f-46f3-a7a4-ea663237c8af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.725980 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4804a1ca-dd11-42f7-913d-4b3c1bdb7ead-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4804a1ca-dd11-42f7-913d-4b3c1bdb7ead" (UID: "4804a1ca-dd11-42f7-913d-4b3c1bdb7ead"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.727381 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/479dc11c-3d7f-46f3-a7a4-ea663237c8af-kube-api-access-4ms9s" (OuterVolumeSpecName: "kube-api-access-4ms9s") pod "479dc11c-3d7f-46f3-a7a4-ea663237c8af" (UID: "479dc11c-3d7f-46f3-a7a4-ea663237c8af"). InnerVolumeSpecName "kube-api-access-4ms9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.727756 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4804a1ca-dd11-42f7-913d-4b3c1bdb7ead-kube-api-access-zrtz8" (OuterVolumeSpecName: "kube-api-access-zrtz8") pod "4804a1ca-dd11-42f7-913d-4b3c1bdb7ead" (UID: "4804a1ca-dd11-42f7-913d-4b3c1bdb7ead"). 
InnerVolumeSpecName "kube-api-access-zrtz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.728124 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d-kube-api-access-lz4mj" (OuterVolumeSpecName: "kube-api-access-lz4mj") pod "435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d" (UID: "435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d"). InnerVolumeSpecName "kube-api-access-lz4mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.728444 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab09a06a-9cbb-420a-b456-1aa12e0bd0e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab09a06a-9cbb-420a-b456-1aa12e0bd0e2" (UID: "ab09a06a-9cbb-420a-b456-1aa12e0bd0e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.731653 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab09a06a-9cbb-420a-b456-1aa12e0bd0e2-kube-api-access-wgss2" (OuterVolumeSpecName: "kube-api-access-wgss2") pod "ab09a06a-9cbb-420a-b456-1aa12e0bd0e2" (UID: "ab09a06a-9cbb-420a-b456-1aa12e0bd0e2"). InnerVolumeSpecName "kube-api-access-wgss2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.819234 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbcc64ec-1a64-403b-be72-d33bb30e5385-operator-scripts\") pod \"cbcc64ec-1a64-403b-be72-d33bb30e5385\" (UID: \"cbcc64ec-1a64-403b-be72-d33bb30e5385\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.819400 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb2d8\" (UniqueName: \"kubernetes.io/projected/cbcc64ec-1a64-403b-be72-d33bb30e5385-kube-api-access-gb2d8\") pod \"cbcc64ec-1a64-403b-be72-d33bb30e5385\" (UID: \"cbcc64ec-1a64-403b-be72-d33bb30e5385\") " Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.820101 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrtz8\" (UniqueName: \"kubernetes.io/projected/4804a1ca-dd11-42f7-913d-4b3c1bdb7ead-kube-api-access-zrtz8\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.820128 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab09a06a-9cbb-420a-b456-1aa12e0bd0e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.820142 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz4mj\" (UniqueName: \"kubernetes.io/projected/435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d-kube-api-access-lz4mj\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.820152 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.820162 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgss2\" (UniqueName: 
\"kubernetes.io/projected/ab09a06a-9cbb-420a-b456-1aa12e0bd0e2-kube-api-access-wgss2\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.820190 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd55de78-9d5c-46fa-9289-2ab8dbe482ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.820200 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ms9s\" (UniqueName: \"kubernetes.io/projected/479dc11c-3d7f-46f3-a7a4-ea663237c8af-kube-api-access-4ms9s\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.820210 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479dc11c-3d7f-46f3-a7a4-ea663237c8af-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.820221 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t2p8\" (UniqueName: \"kubernetes.io/projected/fd55de78-9d5c-46fa-9289-2ab8dbe482ad-kube-api-access-7t2p8\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.820215 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbcc64ec-1a64-403b-be72-d33bb30e5385-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cbcc64ec-1a64-403b-be72-d33bb30e5385" (UID: "cbcc64ec-1a64-403b-be72-d33bb30e5385"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.820235 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4804a1ca-dd11-42f7-913d-4b3c1bdb7ead-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.822370 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbcc64ec-1a64-403b-be72-d33bb30e5385-kube-api-access-gb2d8" (OuterVolumeSpecName: "kube-api-access-gb2d8") pod "cbcc64ec-1a64-403b-be72-d33bb30e5385" (UID: "cbcc64ec-1a64-403b-be72-d33bb30e5385"). InnerVolumeSpecName "kube-api-access-gb2d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.863050 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d451-account-create-mjmt4" event={"ID":"fd55de78-9d5c-46fa-9289-2ab8dbe482ad","Type":"ContainerDied","Data":"f546157e456fbad0b00aa849b7904721de4137601e1eb2de5adcf51c0c5a61e8"} Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.863095 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f546157e456fbad0b00aa849b7904721de4137601e1eb2de5adcf51c0c5a61e8" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.863203 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d451-account-create-mjmt4" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.864785 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5sj86" event={"ID":"35c01d69-7aa7-49af-99f5-465fafbbc191","Type":"ContainerDied","Data":"da0ac6464aa7fcffb4f30bbf86d0ac45931802905c6b45d54b15461cf17ba803"} Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.866578 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da0ac6464aa7fcffb4f30bbf86d0ac45931802905c6b45d54b15461cf17ba803" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.866690 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5sj86" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.871734 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w6572" event={"ID":"479dc11c-3d7f-46f3-a7a4-ea663237c8af","Type":"ContainerDied","Data":"3cd180bfc22ceab0d57321b09ee69451a401d15aa9b7238ff84d7f29f3af579c"} Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.871791 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cd180bfc22ceab0d57321b09ee69451a401d15aa9b7238ff84d7f29f3af579c" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.871883 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w6572" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.877858 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eb6b-account-create-nmg2j" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.878092 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eb6b-account-create-nmg2j" event={"ID":"ab09a06a-9cbb-420a-b456-1aa12e0bd0e2","Type":"ContainerDied","Data":"f4fda8b70200236255b80bbc3b1f78a03c255ef4741fcc207cfc82550e97d43d"} Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.878214 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4fda8b70200236255b80bbc3b1f78a03c255ef4741fcc207cfc82550e97d43d" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.881432 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-v527q" event={"ID":"0de43686-0d8e-4474-befd-ca1bdefb961d","Type":"ContainerDied","Data":"3d43189c9517a3e22e58e10b525a98c01948570d979982c7b5e8aca9cb2ad5ab"} Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.881461 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d43189c9517a3e22e58e10b525a98c01948570d979982c7b5e8aca9cb2ad5ab" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.881579 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-v527q" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.886098 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-849d-account-create-s6d2f" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.886211 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-849d-account-create-s6d2f" event={"ID":"435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d","Type":"ContainerDied","Data":"8d3020e9196c67a03edfae8ab71a088ef67552ccb7f11b0aa33be32dd2484fc0"} Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.886271 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d3020e9196c67a03edfae8ab71a088ef67552ccb7f11b0aa33be32dd2484fc0" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.888997 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c8ca-account-create-s9xf9" event={"ID":"4804a1ca-dd11-42f7-913d-4b3c1bdb7ead","Type":"ContainerDied","Data":"6a2e991040fd7cef4c058f80662155cc23eeb574bad39068585bd1c28afc9507"} Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.889258 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a2e991040fd7cef4c058f80662155cc23eeb574bad39068585bd1c28afc9507" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.889499 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c8ca-account-create-s9xf9" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.896112 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-c5mm4" event={"ID":"cbcc64ec-1a64-403b-be72-d33bb30e5385","Type":"ContainerDied","Data":"9c902b9d14280c5eee91977589b8c0001f77973d816b95c9f8a08b56ba1be1cf"} Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.896133 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-c5mm4" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.896146 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c902b9d14280c5eee91977589b8c0001f77973d816b95c9f8a08b56ba1be1cf" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.923107 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbcc64ec-1a64-403b-be72-d33bb30e5385-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:16 crc kubenswrapper[4482]: I1125 07:02:16.923133 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb2d8\" (UniqueName: \"kubernetes.io/projected/cbcc64ec-1a64-403b-be72-d33bb30e5385-kube-api-access-gb2d8\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:17 crc kubenswrapper[4482]: I1125 07:02:17.908785 4482 generic.go:334] "Generic (PLEG): container finished" podID="7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7" containerID="c4fcbff8456d8cb88143cb0db543501597742200ee16408696b647d90fb2a55d" exitCode=0 Nov 25 07:02:17 crc kubenswrapper[4482]: I1125 07:02:17.908877 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9kkwr" event={"ID":"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7","Type":"ContainerDied","Data":"c4fcbff8456d8cb88143cb0db543501597742200ee16408696b647d90fb2a55d"} Nov 25 07:02:17 crc kubenswrapper[4482]: I1125 07:02:17.911710 4482 generic.go:334] "Generic (PLEG): container finished" podID="b1469e22-6c31-480a-aad8-81d8c0def8d5" containerID="2a18a681d9dcd6e23327e8be6113ef1971964039c02c3b53e1ad151846fda845" exitCode=0 Nov 25 07:02:17 crc kubenswrapper[4482]: I1125 07:02:17.911751 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l" event={"ID":"b1469e22-6c31-480a-aad8-81d8c0def8d5","Type":"ContainerDied","Data":"2a18a681d9dcd6e23327e8be6113ef1971964039c02c3b53e1ad151846fda845"}
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.199049 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.264937 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-ovsdbserver-nb\") pod \"b1469e22-6c31-480a-aad8-81d8c0def8d5\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") "
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.265115 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j22dk\" (UniqueName: \"kubernetes.io/projected/b1469e22-6c31-480a-aad8-81d8c0def8d5-kube-api-access-j22dk\") pod \"b1469e22-6c31-480a-aad8-81d8c0def8d5\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") "
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.265151 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-dns-svc\") pod \"b1469e22-6c31-480a-aad8-81d8c0def8d5\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") "
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.265254 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-config\") pod \"b1469e22-6c31-480a-aad8-81d8c0def8d5\" (UID: \"b1469e22-6c31-480a-aad8-81d8c0def8d5\") "
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.271263 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1469e22-6c31-480a-aad8-81d8c0def8d5-kube-api-access-j22dk" (OuterVolumeSpecName: "kube-api-access-j22dk") pod "b1469e22-6c31-480a-aad8-81d8c0def8d5" (UID: "b1469e22-6c31-480a-aad8-81d8c0def8d5"). InnerVolumeSpecName "kube-api-access-j22dk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.284596 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1469e22-6c31-480a-aad8-81d8c0def8d5" (UID: "b1469e22-6c31-480a-aad8-81d8c0def8d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.284757 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-config" (OuterVolumeSpecName: "config") pod "b1469e22-6c31-480a-aad8-81d8c0def8d5" (UID: "b1469e22-6c31-480a-aad8-81d8c0def8d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.287383 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b1469e22-6c31-480a-aad8-81d8c0def8d5" (UID: "b1469e22-6c31-480a-aad8-81d8c0def8d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.366919 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.367086 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j22dk\" (UniqueName: \"kubernetes.io/projected/b1469e22-6c31-480a-aad8-81d8c0def8d5-kube-api-access-j22dk\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.367105 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.367117 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1469e22-6c31-480a-aad8-81d8c0def8d5-config\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.924085 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l"
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.924814 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f696d8f45-ldd8l" event={"ID":"b1469e22-6c31-480a-aad8-81d8c0def8d5","Type":"ContainerDied","Data":"12ad2a4c6c888f6f37dd5286a91107f5d73c3ed5cb2c889ce1a831476719f5b4"}
Nov 25 07:02:18 crc kubenswrapper[4482]: I1125 07:02:18.924890 4482 scope.go:117] "RemoveContainer" containerID="2a18a681d9dcd6e23327e8be6113ef1971964039c02c3b53e1ad151846fda845"
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.013178 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f696d8f45-ldd8l"]
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.018901 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f696d8f45-ldd8l"]
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.286320 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9kkwr"
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.383662 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-dispersionconf\") pod \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") "
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.383811 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-scripts\") pod \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") "
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.383879 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-swiftconf\") pod \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") "
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.383925 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh9sm\" (UniqueName: \"kubernetes.io/projected/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-kube-api-access-wh9sm\") pod \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") "
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.383958 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-ring-data-devices\") pod \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") "
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.384076 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-etc-swift\") pod \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") "
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.384115 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-combined-ca-bundle\") pod \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\" (UID: \"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7\") "
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.385218 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7" (UID: "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.385426 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7" (UID: "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.390414 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-kube-api-access-wh9sm" (OuterVolumeSpecName: "kube-api-access-wh9sm") pod "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7" (UID: "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7"). InnerVolumeSpecName "kube-api-access-wh9sm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.392601 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7" (UID: "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.407972 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-scripts" (OuterVolumeSpecName: "scripts") pod "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7" (UID: "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.408269 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7" (UID: "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.410662 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7" (UID: "7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.487298 4482 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-dispersionconf\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.487347 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.487358 4482 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-swiftconf\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.487371 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh9sm\" (UniqueName: \"kubernetes.io/projected/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-kube-api-access-wh9sm\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.487385 4482 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-ring-data-devices\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.487394 4482 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-etc-swift\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.487420 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.843535 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1469e22-6c31-480a-aad8-81d8c0def8d5" path="/var/lib/kubelet/pods/b1469e22-6c31-480a-aad8-81d8c0def8d5/volumes"
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.933611 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9kkwr" event={"ID":"7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7","Type":"ContainerDied","Data":"c71d7e10569b7957c2f2e396d61406ad9073d0a8a2ee8ce807cfa7a5845c89e2"}
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.933647 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9kkwr"
Nov 25 07:02:19 crc kubenswrapper[4482]: I1125 07:02:19.933657 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c71d7e10569b7957c2f2e396d61406ad9073d0a8a2ee8ce807cfa7a5845c89e2"
Nov 25 07:02:20 crc kubenswrapper[4482]: I1125 07:02:20.210711 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-c4pcb" podUID="cb9d3e0a-aeb5-4221-a617-71a724c676ed" containerName="ovn-controller" probeResult="failure" output=<
Nov 25 07:02:20 crc kubenswrapper[4482]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Nov 25 07:02:20 crc kubenswrapper[4482]: >
Nov 25 07:02:22 crc kubenswrapper[4482]: I1125 07:02:22.961247 4482 generic.go:334] "Generic (PLEG): container finished" podID="e224ce8a-f213-4745-8cc0-7d1351065d13" containerID="5afb159a6ec1a765071d3f97067f1b997c8bec69368179cff9ad4683d9ca01f6" exitCode=0
Nov 25 07:02:22 crc kubenswrapper[4482]: I1125 07:02:22.961413 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c8776475-qd28b" event={"ID":"e224ce8a-f213-4745-8cc0-7d1351065d13","Type":"ContainerDied","Data":"5afb159a6ec1a765071d3f97067f1b997c8bec69368179cff9ad4683d9ca01f6"}
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.251589 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c8776475-qd28b"
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.360431 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-dns-svc\") pod \"e224ce8a-f213-4745-8cc0-7d1351065d13\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") "
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.360475 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-ovsdbserver-nb\") pod \"e224ce8a-f213-4745-8cc0-7d1351065d13\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") "
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.360594 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpv6n\" (UniqueName: \"kubernetes.io/projected/e224ce8a-f213-4745-8cc0-7d1351065d13-kube-api-access-gpv6n\") pod \"e224ce8a-f213-4745-8cc0-7d1351065d13\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") "
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.360694 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-config\") pod \"e224ce8a-f213-4745-8cc0-7d1351065d13\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") "
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.360768 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-ovsdbserver-sb\") pod \"e224ce8a-f213-4745-8cc0-7d1351065d13\" (UID: \"e224ce8a-f213-4745-8cc0-7d1351065d13\") "
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.370046 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e224ce8a-f213-4745-8cc0-7d1351065d13-kube-api-access-gpv6n" (OuterVolumeSpecName: "kube-api-access-gpv6n") pod "e224ce8a-f213-4745-8cc0-7d1351065d13" (UID: "e224ce8a-f213-4745-8cc0-7d1351065d13"). InnerVolumeSpecName "kube-api-access-gpv6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.383004 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-config" (OuterVolumeSpecName: "config") pod "e224ce8a-f213-4745-8cc0-7d1351065d13" (UID: "e224ce8a-f213-4745-8cc0-7d1351065d13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.383012 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e224ce8a-f213-4745-8cc0-7d1351065d13" (UID: "e224ce8a-f213-4745-8cc0-7d1351065d13"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.383804 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e224ce8a-f213-4745-8cc0-7d1351065d13" (UID: "e224ce8a-f213-4745-8cc0-7d1351065d13"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.384796 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e224ce8a-f213-4745-8cc0-7d1351065d13" (UID: "e224ce8a-f213-4745-8cc0-7d1351065d13"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.464752 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.464791 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.464801 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.464813 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpv6n\" (UniqueName: \"kubernetes.io/projected/e224ce8a-f213-4745-8cc0-7d1351065d13-kube-api-access-gpv6n\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.464824 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e224ce8a-f213-4745-8cc0-7d1351065d13-config\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.971133 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c8776475-qd28b" event={"ID":"e224ce8a-f213-4745-8cc0-7d1351065d13","Type":"ContainerDied","Data":"f37dca9b30685ec0f1fcd93d6dc3bf4f370592d5adbc6b6b0af2ad20084e0284"}
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.971479 4482 scope.go:117] "RemoveContainer" containerID="5afb159a6ec1a765071d3f97067f1b997c8bec69368179cff9ad4683d9ca01f6"
Nov 25 07:02:23 crc kubenswrapper[4482]: I1125 07:02:23.971213 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c8776475-qd28b"
Nov 25 07:02:24 crc kubenswrapper[4482]: I1125 07:02:24.011676 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c8776475-qd28b"]
Nov 25 07:02:24 crc kubenswrapper[4482]: I1125 07:02:24.016683 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76c8776475-qd28b"]
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.220868 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-c4pcb" podUID="cb9d3e0a-aeb5-4221-a617-71a724c676ed" containerName="ovn-controller" probeResult="failure" output=<
Nov 25 07:02:25 crc kubenswrapper[4482]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Nov 25 07:02:25 crc kubenswrapper[4482]: >
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.224068 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.226320 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pgdql"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.433454 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c4pcb-config-z8r72"]
Nov 25 07:02:25 crc kubenswrapper[4482]: E1125 07:02:25.434714 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de43686-0d8e-4474-befd-ca1bdefb961d" containerName="mariadb-database-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.434758 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de43686-0d8e-4474-befd-ca1bdefb961d" containerName="mariadb-database-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: E1125 07:02:25.434794 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbcc64ec-1a64-403b-be72-d33bb30e5385" containerName="mariadb-database-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.434800 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbcc64ec-1a64-403b-be72-d33bb30e5385" containerName="mariadb-database-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: E1125 07:02:25.434809 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d" containerName="mariadb-account-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.434815 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d" containerName="mariadb-account-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: E1125 07:02:25.434825 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd55de78-9d5c-46fa-9289-2ab8dbe482ad" containerName="mariadb-account-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.434841 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd55de78-9d5c-46fa-9289-2ab8dbe482ad" containerName="mariadb-account-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: E1125 07:02:25.434855 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7" containerName="swift-ring-rebalance"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.434861 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7" containerName="swift-ring-rebalance"
Nov 25 07:02:25 crc kubenswrapper[4482]: E1125 07:02:25.434883 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4804a1ca-dd11-42f7-913d-4b3c1bdb7ead" containerName="mariadb-account-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.434888 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="4804a1ca-dd11-42f7-913d-4b3c1bdb7ead" containerName="mariadb-account-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: E1125 07:02:25.434899 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab09a06a-9cbb-420a-b456-1aa12e0bd0e2" containerName="mariadb-account-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.434907 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab09a06a-9cbb-420a-b456-1aa12e0bd0e2" containerName="mariadb-account-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: E1125 07:02:25.434915 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e224ce8a-f213-4745-8cc0-7d1351065d13" containerName="init"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.434920 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="e224ce8a-f213-4745-8cc0-7d1351065d13" containerName="init"
Nov 25 07:02:25 crc kubenswrapper[4482]: E1125 07:02:25.434932 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="479dc11c-3d7f-46f3-a7a4-ea663237c8af" containerName="mariadb-database-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.434938 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="479dc11c-3d7f-46f3-a7a4-ea663237c8af" containerName="mariadb-database-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: E1125 07:02:25.434946 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c01d69-7aa7-49af-99f5-465fafbbc191" containerName="mariadb-database-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.434951 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c01d69-7aa7-49af-99f5-465fafbbc191" containerName="mariadb-database-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: E1125 07:02:25.434962 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1469e22-6c31-480a-aad8-81d8c0def8d5" containerName="init"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.434969 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1469e22-6c31-480a-aad8-81d8c0def8d5" containerName="init"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.435244 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d" containerName="mariadb-account-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.435261 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="e224ce8a-f213-4745-8cc0-7d1351065d13" containerName="init"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.435270 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1469e22-6c31-480a-aad8-81d8c0def8d5" containerName="init"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.435278 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd55de78-9d5c-46fa-9289-2ab8dbe482ad" containerName="mariadb-account-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.435285 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="479dc11c-3d7f-46f3-a7a4-ea663237c8af" containerName="mariadb-database-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.435291 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="4804a1ca-dd11-42f7-913d-4b3c1bdb7ead" containerName="mariadb-account-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.435300 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c01d69-7aa7-49af-99f5-465fafbbc191" containerName="mariadb-database-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.435311 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de43686-0d8e-4474-befd-ca1bdefb961d" containerName="mariadb-database-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.435317 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbcc64ec-1a64-403b-be72-d33bb30e5385" containerName="mariadb-database-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.435325 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab09a06a-9cbb-420a-b456-1aa12e0bd0e2" containerName="mariadb-account-create"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.435337 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b01a0a7-35fb-425e-a5d0-4ef1c95d87c7" containerName="swift-ring-rebalance"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.436090 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.440056 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.446284 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c4pcb-config-z8r72"]
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.495419 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a905e8c8-5c9c-4988-8482-20b6dc49dfba-scripts\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.495485 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-run-ovn\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.495590 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-run\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.495814 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4qpp\" (UniqueName: \"kubernetes.io/projected/a905e8c8-5c9c-4988-8482-20b6dc49dfba-kube-api-access-x4qpp\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.495862 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a905e8c8-5c9c-4988-8482-20b6dc49dfba-additional-scripts\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.495993 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-log-ovn\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.597928 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4qpp\" (UniqueName: \"kubernetes.io/projected/a905e8c8-5c9c-4988-8482-20b6dc49dfba-kube-api-access-x4qpp\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.597987 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a905e8c8-5c9c-4988-8482-20b6dc49dfba-additional-scripts\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.598090 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-log-ovn\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.598126 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a905e8c8-5c9c-4988-8482-20b6dc49dfba-scripts\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.598160 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-run-ovn\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.598239 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-run\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.598639 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-run\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.598653 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-log-ovn\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.599515 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a905e8c8-5c9c-4988-8482-20b6dc49dfba-additional-scripts\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.599562 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-run-ovn\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.600664 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a905e8c8-5c9c-4988-8482-20b6dc49dfba-scripts\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.619080 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4qpp\" (UniqueName: \"kubernetes.io/projected/a905e8c8-5c9c-4988-8482-20b6dc49dfba-kube-api-access-x4qpp\") pod \"ovn-controller-c4pcb-config-z8r72\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") " pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.671481 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.751631 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:25 crc kubenswrapper[4482]: I1125 07:02:25.850073 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e224ce8a-f213-4745-8cc0-7d1351065d13" path="/var/lib/kubelet/pods/e224ce8a-f213-4745-8cc0-7d1351065d13/volumes"
Nov 25 07:02:26 crc kubenswrapper[4482]: I1125 07:02:26.193289 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c4pcb-config-z8r72"]
Nov 25 07:02:26 crc kubenswrapper[4482]: W1125 07:02:26.193325 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda905e8c8_5c9c_4988_8482_20b6dc49dfba.slice/crio-7c3654ed3cfa3698ed899ba63059e1e2709fc7936bc98bd7972b01706e1aeef2 WatchSource:0}: Error finding container 7c3654ed3cfa3698ed899ba63059e1e2709fc7936bc98bd7972b01706e1aeef2: Status 404 returned error can't find the container with id 7c3654ed3cfa3698ed899ba63059e1e2709fc7936bc98bd7972b01706e1aeef2
Nov 25 07:02:27 crc kubenswrapper[4482]: I1125 07:02:27.016750 4482 generic.go:334] "Generic (PLEG): container finished" podID="a905e8c8-5c9c-4988-8482-20b6dc49dfba" containerID="439205d16de18c8a65ebb873a29d16d2b37809ab701037fc2a36954b008972d6" exitCode=0
Nov 25 07:02:27 crc kubenswrapper[4482]: I1125 07:02:27.016810 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4pcb-config-z8r72" event={"ID":"a905e8c8-5c9c-4988-8482-20b6dc49dfba","Type":"ContainerDied","Data":"439205d16de18c8a65ebb873a29d16d2b37809ab701037fc2a36954b008972d6"}
Nov 25 07:02:27 crc kubenswrapper[4482]: I1125 07:02:27.017394 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4pcb-config-z8r72" event={"ID":"a905e8c8-5c9c-4988-8482-20b6dc49dfba","Type":"ContainerStarted","Data":"7c3654ed3cfa3698ed899ba63059e1e2709fc7936bc98bd7972b01706e1aeef2"}
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.284755 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.352521 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a905e8c8-5c9c-4988-8482-20b6dc49dfba-scripts\") pod \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") "
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.352674 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-log-ovn\") pod \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") "
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.352753 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a905e8c8-5c9c-4988-8482-20b6dc49dfba" (UID: "a905e8c8-5c9c-4988-8482-20b6dc49dfba"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.352782 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-run\") pod \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") "
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.352871 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a905e8c8-5c9c-4988-8482-20b6dc49dfba-additional-scripts\") pod \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") "
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.352864 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-run" (OuterVolumeSpecName: "var-run") pod "a905e8c8-5c9c-4988-8482-20b6dc49dfba" (UID: "a905e8c8-5c9c-4988-8482-20b6dc49dfba"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.353382 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a905e8c8-5c9c-4988-8482-20b6dc49dfba-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a905e8c8-5c9c-4988-8482-20b6dc49dfba" (UID: "a905e8c8-5c9c-4988-8482-20b6dc49dfba"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.353451 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4qpp\" (UniqueName: \"kubernetes.io/projected/a905e8c8-5c9c-4988-8482-20b6dc49dfba-kube-api-access-x4qpp\") pod \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") "
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.353567 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a905e8c8-5c9c-4988-8482-20b6dc49dfba-scripts" (OuterVolumeSpecName: "scripts") pod "a905e8c8-5c9c-4988-8482-20b6dc49dfba" (UID: "a905e8c8-5c9c-4988-8482-20b6dc49dfba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.354351 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-run-ovn\") pod \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\" (UID: \"a905e8c8-5c9c-4988-8482-20b6dc49dfba\") "
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.354431 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a905e8c8-5c9c-4988-8482-20b6dc49dfba" (UID: "a905e8c8-5c9c-4988-8482-20b6dc49dfba"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.355355 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a905e8c8-5c9c-4988-8482-20b6dc49dfba-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.355375 4482 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-log-ovn\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.355390 4482 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-run\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.355401 4482 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a905e8c8-5c9c-4988-8482-20b6dc49dfba-additional-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.355411 4482 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a905e8c8-5c9c-4988-8482-20b6dc49dfba-var-run-ovn\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.361358 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a905e8c8-5c9c-4988-8482-20b6dc49dfba-kube-api-access-x4qpp" (OuterVolumeSpecName: "kube-api-access-x4qpp") pod "a905e8c8-5c9c-4988-8482-20b6dc49dfba" (UID: "a905e8c8-5c9c-4988-8482-20b6dc49dfba"). InnerVolumeSpecName "kube-api-access-x4qpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:02:28 crc kubenswrapper[4482]: I1125 07:02:28.457899 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4qpp\" (UniqueName: \"kubernetes.io/projected/a905e8c8-5c9c-4988-8482-20b6dc49dfba-kube-api-access-x4qpp\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.034389 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4pcb-config-z8r72" event={"ID":"a905e8c8-5c9c-4988-8482-20b6dc49dfba","Type":"ContainerDied","Data":"7c3654ed3cfa3698ed899ba63059e1e2709fc7936bc98bd7972b01706e1aeef2"}
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.034677 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c3654ed3cfa3698ed899ba63059e1e2709fc7936bc98bd7972b01706e1aeef2"
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.034476 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4pcb-config-z8r72"
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.271538 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0"
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.277583 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/21d6404f-f801-4230-af65-d110706155c6-etc-swift\") pod \"swift-storage-0\" (UID: \"21d6404f-f801-4230-af65-d110706155c6\") " pod="openstack/swift-storage-0"
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.373709 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-c4pcb-config-z8r72"]
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.377570 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-c4pcb-config-z8r72"]
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.377742 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.428100 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-c4pcb-config-fr5fs"]
Nov 25 07:02:29 crc kubenswrapper[4482]: E1125 07:02:29.429457 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a905e8c8-5c9c-4988-8482-20b6dc49dfba" containerName="ovn-config"
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.429489 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="a905e8c8-5c9c-4988-8482-20b6dc49dfba" containerName="ovn-config"
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.429887 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="a905e8c8-5c9c-4988-8482-20b6dc49dfba" containerName="ovn-config"
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.444160 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.454714 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Nov 25 07:02:29 crc kubenswrapper[4482]: I1125 07:02:29.469378 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c4pcb-config-fr5fs"]
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.581110 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-log-ovn\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.581195 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6k7q\" (UniqueName: \"kubernetes.io/projected/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-kube-api-access-m6k7q\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.581244 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-run-ovn\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.581336 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-scripts\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.581415 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-run\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.581771 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-additional-scripts\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.684695 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-additional-scripts\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.684877 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-log-ovn\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.684950 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6k7q\" (UniqueName: \"kubernetes.io/projected/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-kube-api-access-m6k7q\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.684996 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-run-ovn\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.685110 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-scripts\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.685197 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-run\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.685545 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-run\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.685966 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-log-ovn\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.686018 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-run-ovn\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.686320 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-additional-scripts\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.687923 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-scripts\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.705622 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6k7q\" (UniqueName: \"kubernetes.io/projected/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-kube-api-access-m6k7q\") pod \"ovn-controller-c4pcb-config-fr5fs\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") " pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.784453 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:29.840515 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a905e8c8-5c9c-4988-8482-20b6dc49dfba" path="/var/lib/kubelet/pods/a905e8c8-5c9c-4988-8482-20b6dc49dfba/volumes"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:30.222682 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-c4pcb"
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:30.370048 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-c4pcb-config-fr5fs"]
Nov 25 07:02:30 crc kubenswrapper[4482]: I1125 07:02:30.490547 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Nov 25 07:02:31 crc kubenswrapper[4482]: I1125 07:02:31.077307 4482 generic.go:334] "Generic (PLEG): container finished" podID="14e4d2b9-dbc6-4786-bc7a-66fee23d5d80" containerID="ac5c5842cbfbf2124176f2c6e6276d798b5f1f4b00838ac3bb8ab115496f661b" exitCode=0
Nov 25 07:02:31 crc kubenswrapper[4482]: I1125 07:02:31.077359 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4pcb-config-fr5fs" event={"ID":"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80","Type":"ContainerDied","Data":"ac5c5842cbfbf2124176f2c6e6276d798b5f1f4b00838ac3bb8ab115496f661b"}
Nov 25 07:02:31 crc kubenswrapper[4482]: I1125 07:02:31.077739 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4pcb-config-fr5fs" event={"ID":"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80","Type":"ContainerStarted","Data":"4f35129adc7f075b29fa306537d8544095353bdbb20e2ee501c3bc6182a34995"}
Nov 25 07:02:31 crc kubenswrapper[4482]: I1125 07:02:31.079377 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"ef22543ba08ec9a31ab060c23ad3d74b66931623dc4c0066d8496b5fd40901ba"}
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.343116 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.461750 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-run\") pod \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") "
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.461855 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-log-ovn\") pod \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") "
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.461883 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-run-ovn\") pod \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") "
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.461983 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-scripts\") pod \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") "
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.462027 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6k7q\" (UniqueName: \"kubernetes.io/projected/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-kube-api-access-m6k7q\") pod \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") "
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.462223 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "14e4d2b9-dbc6-4786-bc7a-66fee23d5d80" (UID: "14e4d2b9-dbc6-4786-bc7a-66fee23d5d80"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.462287 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-additional-scripts\") pod \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\" (UID: \"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80\") "
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.462323 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "14e4d2b9-dbc6-4786-bc7a-66fee23d5d80" (UID: "14e4d2b9-dbc6-4786-bc7a-66fee23d5d80"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.462357 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-run" (OuterVolumeSpecName: "var-run") pod "14e4d2b9-dbc6-4786-bc7a-66fee23d5d80" (UID: "14e4d2b9-dbc6-4786-bc7a-66fee23d5d80"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.463019 4482 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-log-ovn\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.463033 4482 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-run-ovn\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.463043 4482 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-var-run\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.463410 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "14e4d2b9-dbc6-4786-bc7a-66fee23d5d80" (UID: "14e4d2b9-dbc6-4786-bc7a-66fee23d5d80"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.463660 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-scripts" (OuterVolumeSpecName: "scripts") pod "14e4d2b9-dbc6-4786-bc7a-66fee23d5d80" (UID: "14e4d2b9-dbc6-4786-bc7a-66fee23d5d80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.469129 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-kube-api-access-m6k7q" (OuterVolumeSpecName: "kube-api-access-m6k7q") pod "14e4d2b9-dbc6-4786-bc7a-66fee23d5d80" (UID: "14e4d2b9-dbc6-4786-bc7a-66fee23d5d80"). InnerVolumeSpecName "kube-api-access-m6k7q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.564739 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.564773 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6k7q\" (UniqueName: \"kubernetes.io/projected/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-kube-api-access-m6k7q\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:32 crc kubenswrapper[4482]: I1125 07:02:32.564785 4482 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80-additional-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:02:33 crc kubenswrapper[4482]: I1125 07:02:33.097315 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-c4pcb-config-fr5fs" event={"ID":"14e4d2b9-dbc6-4786-bc7a-66fee23d5d80","Type":"ContainerDied","Data":"4f35129adc7f075b29fa306537d8544095353bdbb20e2ee501c3bc6182a34995"}
Nov 25 07:02:33 crc kubenswrapper[4482]: I1125 07:02:33.097350 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-c4pcb-config-fr5fs"
Nov 25 07:02:33 crc kubenswrapper[4482]: I1125 07:02:33.097361 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f35129adc7f075b29fa306537d8544095353bdbb20e2ee501c3bc6182a34995"
Nov 25 07:02:33 crc kubenswrapper[4482]: I1125 07:02:33.413829 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-c4pcb-config-fr5fs"]
Nov 25 07:02:33 crc kubenswrapper[4482]: I1125 07:02:33.424539 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-c4pcb-config-fr5fs"]
Nov 25 07:02:33 crc kubenswrapper[4482]: I1125 07:02:33.841761 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e4d2b9-dbc6-4786-bc7a-66fee23d5d80" path="/var/lib/kubelet/pods/14e4d2b9-dbc6-4786-bc7a-66fee23d5d80/volumes"
Nov 25 07:02:34 crc kubenswrapper[4482]: I1125 07:02:34.110810 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"9fa9709f89da47e29f34869ea4cdc826da6a4ac8957799fd92e7c399302d962b"}
Nov 25 07:02:36 crc kubenswrapper[4482]: I1125 07:02:36.133642 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"b003bd62d753194e3db182dab1be6744ed05c06994de20b502f57c81fa97040c"}
Nov 25 07:02:36 crc kubenswrapper[4482]: I1125 07:02:36.134294 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"9a6a8409d7109e3d8e04960436a45860b834e60fb716ad4e6a84c64d0deeb4c0"}
Nov 25 07:02:36 crc kubenswrapper[4482]: I1125 07:02:36.134311 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"b719a2b49563871109d27add13674d0066566c8ba698fdf1a2229d8382e6cd9b"}
Nov 25 07:02:36 crc kubenswrapper[4482]: I1125 07:02:36.136053 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nlsmj" event={"ID":"a3d08539-2898-4d05-af16-1dd533f1720d","Type":"ContainerStarted","Data":"bf9624616701ab1c4e4f88c5ff72594fc0c04a3b485b12aab244e8d50c4d9407"}
Nov 25 07:02:38 crc kubenswrapper[4482]: I1125 07:02:38.151248 4482 generic.go:334] "Generic (PLEG): container finished" podID="a3d08539-2898-4d05-af16-1dd533f1720d" containerID="bf9624616701ab1c4e4f88c5ff72594fc0c04a3b485b12aab244e8d50c4d9407" exitCode=0
Nov 25 07:02:38 crc kubenswrapper[4482]: I1125 07:02:38.151322 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nlsmj" event={"ID":"a3d08539-2898-4d05-af16-1dd533f1720d","Type":"ContainerDied","Data":"bf9624616701ab1c4e4f88c5ff72594fc0c04a3b485b12aab244e8d50c4d9407"}
Nov 25 07:02:38 crc kubenswrapper[4482]: I1125 07:02:38.159947 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"0482b9d689f8934ea8ec0aa952ec9cb520b0aa477895af18cef4c471e26e8b33"}
Nov 25 07:02:38 crc kubenswrapper[4482]: I1125 07:02:38.159983 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"ca5b949027822f51d6384e577fd6885cb14e75b8ddebe57d7c3540939738713a"}
Nov 25 07:02:38 crc kubenswrapper[4482]: I1125 07:02:38.159994 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"de2772b17a4c1bfaf59681fee2973e80040318626fa632b3aaa7e16434aa696b"}
Nov 25 07:02:38 crc kubenswrapper[4482]: I1125 07:02:38.160001 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"c78611662ad785c7dab6980a07a2c211ef1f1b71893252b0cf970a3377e3ff95"}
Nov 25 07:02:39 crc kubenswrapper[4482]: I1125 07:02:39.117495 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 07:02:39 crc kubenswrapper[4482]: I1125 07:02:39.117586 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 07:02:39 crc kubenswrapper[4482]: I1125 07:02:39.436967 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nlsmj"
Nov 25 07:02:39 crc kubenswrapper[4482]: I1125 07:02:39.616716 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bl2g\" (UniqueName: \"kubernetes.io/projected/a3d08539-2898-4d05-af16-1dd533f1720d-kube-api-access-8bl2g\") pod \"a3d08539-2898-4d05-af16-1dd533f1720d\" (UID: \"a3d08539-2898-4d05-af16-1dd533f1720d\") "
Nov 25 07:02:39 crc kubenswrapper[4482]: I1125 07:02:39.616776 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d08539-2898-4d05-af16-1dd533f1720d-combined-ca-bundle\") pod \"a3d08539-2898-4d05-af16-1dd533f1720d\" (UID: \"a3d08539-2898-4d05-af16-1dd533f1720d\") "
Nov 25 07:02:39 crc kubenswrapper[4482]: I1125 07:02:39.616973 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d08539-2898-4d05-af16-1dd533f1720d-config-data\") pod \"a3d08539-2898-4d05-af16-1dd533f1720d\" (UID: \"a3d08539-2898-4d05-af16-1dd533f1720d\") "
Nov 25 07:02:39 crc kubenswrapper[4482]: I1125 07:02:39.623852 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d08539-2898-4d05-af16-1dd533f1720d-kube-api-access-8bl2g" (OuterVolumeSpecName: "kube-api-access-8bl2g") pod "a3d08539-2898-4d05-af16-1dd533f1720d" (UID: "a3d08539-2898-4d05-af16-1dd533f1720d"). InnerVolumeSpecName "kube-api-access-8bl2g".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:39 crc kubenswrapper[4482]: I1125 07:02:39.641230 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d08539-2898-4d05-af16-1dd533f1720d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3d08539-2898-4d05-af16-1dd533f1720d" (UID: "a3d08539-2898-4d05-af16-1dd533f1720d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:39 crc kubenswrapper[4482]: I1125 07:02:39.655088 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d08539-2898-4d05-af16-1dd533f1720d-config-data" (OuterVolumeSpecName: "config-data") pod "a3d08539-2898-4d05-af16-1dd533f1720d" (UID: "a3d08539-2898-4d05-af16-1dd533f1720d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:39 crc kubenswrapper[4482]: I1125 07:02:39.720146 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bl2g\" (UniqueName: \"kubernetes.io/projected/a3d08539-2898-4d05-af16-1dd533f1720d-kube-api-access-8bl2g\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:39 crc kubenswrapper[4482]: I1125 07:02:39.720200 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d08539-2898-4d05-af16-1dd533f1720d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:39 crc kubenswrapper[4482]: I1125 07:02:39.720214 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d08539-2898-4d05-af16-1dd533f1720d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.189961 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"8d13148cd384fa6f790bc41f3c882a588aca1c1b8e44c2d60dff0a49f6ee32a7"} Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.190355 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"fef537415ef0c7efb1120638d2a13e35b697bcaf357589d6f151296117e42de8"} Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.194845 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nlsmj" event={"ID":"a3d08539-2898-4d05-af16-1dd533f1720d","Type":"ContainerDied","Data":"018a61b90dd74b8d4a66aee9bc1a45a9b2284c9c94fa8e180f66777f9260f6d7"} Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.194894 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="018a61b90dd74b8d4a66aee9bc1a45a9b2284c9c94fa8e180f66777f9260f6d7" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.194934 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-nlsmj" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.733787 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66d8846475-ghcrk"] Nov 25 07:02:40 crc kubenswrapper[4482]: E1125 07:02:40.737696 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e4d2b9-dbc6-4786-bc7a-66fee23d5d80" containerName="ovn-config" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.737790 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e4d2b9-dbc6-4786-bc7a-66fee23d5d80" containerName="ovn-config" Nov 25 07:02:40 crc kubenswrapper[4482]: E1125 07:02:40.737861 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d08539-2898-4d05-af16-1dd533f1720d" containerName="keystone-db-sync" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.737909 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d08539-2898-4d05-af16-1dd533f1720d" containerName="keystone-db-sync" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.738132 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e4d2b9-dbc6-4786-bc7a-66fee23d5d80" containerName="ovn-config" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.738217 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d08539-2898-4d05-af16-1dd533f1720d" containerName="keystone-db-sync" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.744636 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.760307 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-blhln"] Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.769316 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.776688 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.776850 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nl4pz" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.776881 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.777120 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.777297 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.779486 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66d8846475-ghcrk"] Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.820726 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-blhln"] Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.953680 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fcxt\" (UniqueName: \"kubernetes.io/projected/55ccee6e-6831-4f26-b3f3-5c6de363adb8-kube-api-access-4fcxt\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.953810 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs8dm\" (UniqueName: \"kubernetes.io/projected/be3949da-bc32-48f7-8330-031cc2de23e4-kube-api-access-xs8dm\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.953845 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-config-data\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.953900 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-fernet-keys\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.953935 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-ovsdbserver-sb\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.953985 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-scripts\") pod \"keystone-bootstrap-blhln\" (UID: 
\"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.954006 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-credential-keys\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.954042 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-combined-ca-bundle\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.954094 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-ovsdbserver-nb\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.954151 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-config\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.954219 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-dns-svc\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.975992 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-v2dqt"] Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.980372 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-v2dqt" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.997647 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 25 07:02:40 crc kubenswrapper[4482]: I1125 07:02:40.997933 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-ngzzq" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.006764 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76cc5bdc65-wzwtb"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.008244 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.025560 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.025831 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.025974 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-7nnr6" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.026099 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.027367 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-v2dqt"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.056566 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs8dm\" (UniqueName: \"kubernetes.io/projected/be3949da-bc32-48f7-8330-031cc2de23e4-kube-api-access-xs8dm\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.056633 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-config-data\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.056671 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-fernet-keys\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.056717 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-ovsdbserver-sb\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.056751 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-scripts\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.056772 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-credential-keys\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.056807 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-combined-ca-bundle\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:41 
crc kubenswrapper[4482]: I1125 07:02:41.056847 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-ovsdbserver-nb\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.056894 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-config\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.057037 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-dns-svc\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.057206 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fcxt\" (UniqueName: \"kubernetes.io/projected/55ccee6e-6831-4f26-b3f3-5c6de363adb8-kube-api-access-4fcxt\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.058652 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-ovsdbserver-sb\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.059345 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-ovsdbserver-nb\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.064209 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-dns-svc\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.077446 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-config\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.078679 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-scripts\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.079294 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-config-data\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.085038 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-combined-ca-bundle\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.085647 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs8dm\" (UniqueName: \"kubernetes.io/projected/be3949da-bc32-48f7-8330-031cc2de23e4-kube-api-access-xs8dm\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.088658 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-credential-keys\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.089132 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-fernet-keys\") pod \"keystone-bootstrap-blhln\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.092244 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76cc5bdc65-wzwtb"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.093516 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fcxt\" (UniqueName: \"kubernetes.io/projected/55ccee6e-6831-4f26-b3f3-5c6de363adb8-kube-api-access-4fcxt\") pod \"dnsmasq-dns-66d8846475-ghcrk\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") " pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.160696 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-scripts\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.161113 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e50321d-a59a-4d39-a485-4299ced13bdc-combined-ca-bundle\") pod \"heat-db-sync-v2dqt\" (UID: \"3e50321d-a59a-4d39-a485-4299ced13bdc\") " pod="openstack/heat-db-sync-v2dqt" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.161246 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-horizon-secret-key\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.161326 4482 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-config-data\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.161447 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjzsv\" (UniqueName: \"kubernetes.io/projected/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-kube-api-access-wjzsv\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.161540 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jcz5\" (UniqueName: \"kubernetes.io/projected/3e50321d-a59a-4d39-a485-4299ced13bdc-kube-api-access-6jcz5\") pod \"heat-db-sync-v2dqt\" (UID: \"3e50321d-a59a-4d39-a485-4299ced13bdc\") " pod="openstack/heat-db-sync-v2dqt" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.161668 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e50321d-a59a-4d39-a485-4299ced13bdc-config-data\") pod \"heat-db-sync-v2dqt\" (UID: \"3e50321d-a59a-4d39-a485-4299ced13bdc\") " pod="openstack/heat-db-sync-v2dqt" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.161788 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-logs\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.218222 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-7b7rr"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.219432 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-7b7rr" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.228726 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kdhzt" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.229311 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.229530 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.264226 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e50321d-a59a-4d39-a485-4299ced13bdc-combined-ca-bundle\") pod \"heat-db-sync-v2dqt\" (UID: \"3e50321d-a59a-4d39-a485-4299ced13bdc\") " pod="openstack/heat-db-sync-v2dqt" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.264268 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-horizon-secret-key\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.264287 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-config-data\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.264326 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjzsv\" (UniqueName: \"kubernetes.io/projected/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-kube-api-access-wjzsv\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.264346 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jcz5\" (UniqueName: \"kubernetes.io/projected/3e50321d-a59a-4d39-a485-4299ced13bdc-kube-api-access-6jcz5\") pod \"heat-db-sync-v2dqt\" (UID: \"3e50321d-a59a-4d39-a485-4299ced13bdc\") " pod="openstack/heat-db-sync-v2dqt" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.264387 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e50321d-a59a-4d39-a485-4299ced13bdc-config-data\") pod \"heat-db-sync-v2dqt\" (UID: \"3e50321d-a59a-4d39-a485-4299ced13bdc\") " pod="openstack/heat-db-sync-v2dqt" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.264430 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-logs\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.264488 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-scripts\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " 
pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.265081 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-scripts\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.267699 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-ggvxs"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.268835 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.270648 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-logs\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.274753 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-horizon-secret-key\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.279101 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e50321d-a59a-4d39-a485-4299ced13bdc-config-data\") pod \"heat-db-sync-v2dqt\" (UID: \"3e50321d-a59a-4d39-a485-4299ced13bdc\") " pod="openstack/heat-db-sync-v2dqt" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.291350 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.291581 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fv2fv" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.291717 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.302886 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-config-data\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.308830 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e50321d-a59a-4d39-a485-4299ced13bdc-combined-ca-bundle\") pod \"heat-db-sync-v2dqt\" (UID: \"3e50321d-a59a-4d39-a485-4299ced13bdc\") " pod="openstack/heat-db-sync-v2dqt" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.309311 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jcz5\" (UniqueName: \"kubernetes.io/projected/3e50321d-a59a-4d39-a485-4299ced13bdc-kube-api-access-6jcz5\") pod \"heat-db-sync-v2dqt\" (UID: \"3e50321d-a59a-4d39-a485-4299ced13bdc\") " pod="openstack/heat-db-sync-v2dqt" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.313688 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-sync-7b7rr"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.335443 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"f91c9b91d2006b5da48e4dfc6ff248ce3c0daec50d24871a3818454544e2ef56"} Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.337701 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjzsv\" (UniqueName: \"kubernetes.io/projected/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-kube-api-access-wjzsv\") pod \"horizon-76cc5bdc65-wzwtb\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.345718 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.355324 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ggvxs"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.375231 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.384329 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/573eba52-c038-42e0-89a7-4791962151a4-config\") pod \"neutron-db-sync-7b7rr\" (UID: \"573eba52-c038-42e0-89a7-4791962151a4\") " pod="openstack/neutron-db-sync-7b7rr" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.384371 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-combined-ca-bundle\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.384422 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573eba52-c038-42e0-89a7-4791962151a4-combined-ca-bundle\") pod \"neutron-db-sync-7b7rr\" (UID: \"573eba52-c038-42e0-89a7-4791962151a4\") " pod="openstack/neutron-db-sync-7b7rr" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.384442 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6x86\" (UniqueName: \"kubernetes.io/projected/6f1385f6-5258-4372-a20a-30a7229ec2e8-kube-api-access-v6x86\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.384459 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-scripts\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.384491 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx7qb\" (UniqueName: \"kubernetes.io/projected/573eba52-c038-42e0-89a7-4791962151a4-kube-api-access-vx7qb\") pod \"neutron-db-sync-7b7rr\" (UID: 
\"573eba52-c038-42e0-89a7-4791962151a4\") " pod="openstack/neutron-db-sync-7b7rr" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.384530 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-db-sync-config-data\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.384566 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f1385f6-5258-4372-a20a-30a7229ec2e8-etc-machine-id\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.384586 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-config-data\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.388836 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.406013 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.408151 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.415613 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.416215 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.453083 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-qm4lm"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.454085 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qm4lm" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.460269 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.460518 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7mvql" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.480209 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.487440 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/573eba52-c038-42e0-89a7-4791962151a4-config\") pod \"neutron-db-sync-7b7rr\" (UID: \"573eba52-c038-42e0-89a7-4791962151a4\") " pod="openstack/neutron-db-sync-7b7rr" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.487476 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-combined-ca-bundle\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.487539 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573eba52-c038-42e0-89a7-4791962151a4-combined-ca-bundle\") pod \"neutron-db-sync-7b7rr\" (UID: \"573eba52-c038-42e0-89a7-4791962151a4\") " pod="openstack/neutron-db-sync-7b7rr" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.487561 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6x86\" (UniqueName: \"kubernetes.io/projected/6f1385f6-5258-4372-a20a-30a7229ec2e8-kube-api-access-v6x86\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.487577 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-scripts\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.487609 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx7qb\" (UniqueName: \"kubernetes.io/projected/573eba52-c038-42e0-89a7-4791962151a4-kube-api-access-vx7qb\") pod \"neutron-db-sync-7b7rr\" (UID: \"573eba52-c038-42e0-89a7-4791962151a4\") " pod="openstack/neutron-db-sync-7b7rr" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.487664 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-db-sync-config-data\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.487697 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f1385f6-5258-4372-a20a-30a7229ec2e8-etc-machine-id\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" 
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.487718 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-config-data\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.490977 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-config-data\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.492717 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f1385f6-5258-4372-a20a-30a7229ec2e8-etc-machine-id\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.501816 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/573eba52-c038-42e0-89a7-4791962151a4-config\") pod \"neutron-db-sync-7b7rr\" (UID: \"573eba52-c038-42e0-89a7-4791962151a4\") " pod="openstack/neutron-db-sync-7b7rr" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.505775 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-scripts\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.509253 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573eba52-c038-42e0-89a7-4791962151a4-combined-ca-bundle\") pod \"neutron-db-sync-7b7rr\" (UID: \"573eba52-c038-42e0-89a7-4791962151a4\") " pod="openstack/neutron-db-sync-7b7rr" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.511215 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qm4lm"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.511622 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-db-sync-config-data\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.515313 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-combined-ca-bundle\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.526591 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5666447f7c-7kf4h"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.528107 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5666447f7c-7kf4h" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.552676 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6x86\" (UniqueName: \"kubernetes.io/projected/6f1385f6-5258-4372-a20a-30a7229ec2e8-kube-api-access-v6x86\") pod \"cinder-db-sync-ggvxs\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.555962 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx7qb\" (UniqueName: \"kubernetes.io/projected/573eba52-c038-42e0-89a7-4791962151a4-kube-api-access-vx7qb\") pod \"neutron-db-sync-7b7rr\" (UID: \"573eba52-c038-42e0-89a7-4791962151a4\") " pod="openstack/neutron-db-sync-7b7rr" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.564240 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7b7rr" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.564578 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66d8846475-ghcrk"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.579198 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5666447f7c-7kf4h"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.589757 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t57v\" (UniqueName: \"kubernetes.io/projected/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-kube-api-access-6t57v\") pod \"barbican-db-sync-qm4lm\" (UID: \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\") " pod="openstack/barbican-db-sync-qm4lm" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.589801 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-combined-ca-bundle\") pod \"barbican-db-sync-qm4lm\" (UID: \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\") " pod="openstack/barbican-db-sync-qm4lm" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.589835 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.589854 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-config-data\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.589867 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx7r8\" (UniqueName: \"kubernetes.io/projected/b2c0ac8f-2b76-45a3-af85-5990913bc03a-kube-api-access-gx7r8\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.589942 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2c0ac8f-2b76-45a3-af85-5990913bc03a-run-httpd\") pod 
\"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.590105 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.590138 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2c0ac8f-2b76-45a3-af85-5990913bc03a-log-httpd\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.590164 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-scripts\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.590215 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-db-sync-config-data\") pod \"barbican-db-sync-qm4lm\" (UID: \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\") " pod="openstack/barbican-db-sync-qm4lm" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.605364 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-v2dqt" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.637264 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fc948c8c-tbrq2"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.638759 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.678236 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fc948c8c-tbrq2"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.703861 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.703892 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2c0ac8f-2b76-45a3-af85-5990913bc03a-log-httpd\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.703916 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-scripts\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.703938 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0204e2ef-b54e-40fd-a896-d366754a5b5f-logs\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.703964 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-db-sync-config-data\") pod \"barbican-db-sync-qm4lm\" (UID: \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\") " pod="openstack/barbican-db-sync-qm4lm" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.704009 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t57v\" (UniqueName: \"kubernetes.io/projected/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-kube-api-access-6t57v\") pod \"barbican-db-sync-qm4lm\" (UID: \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\") " pod="openstack/barbican-db-sync-qm4lm" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.704030 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-combined-ca-bundle\") pod \"barbican-db-sync-qm4lm\" (UID: \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\") " pod="openstack/barbican-db-sync-qm4lm" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.704046 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.704062 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-config-data\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc 
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.704093 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0204e2ef-b54e-40fd-a896-d366754a5b5f-scripts\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.704128 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgmsm\" (UniqueName: \"kubernetes.io/projected/0204e2ef-b54e-40fd-a896-d366754a5b5f-kube-api-access-fgmsm\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.704147 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0204e2ef-b54e-40fd-a896-d366754a5b5f-horizon-secret-key\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.704179 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2c0ac8f-2b76-45a3-af85-5990913bc03a-run-httpd\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.704239 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0204e2ef-b54e-40fd-a896-d366754a5b5f-config-data\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.710069 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2c0ac8f-2b76-45a3-af85-5990913bc03a-log-httpd\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.723114 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2c0ac8f-2b76-45a3-af85-5990913bc03a-run-httpd\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.735663 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t57v\" (UniqueName: \"kubernetes.io/projected/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-kube-api-access-6t57v\") pod \"barbican-db-sync-qm4lm\" (UID: \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\") " pod="openstack/barbican-db-sync-qm4lm"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.743745 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-combined-ca-bundle\") pod \"barbican-db-sync-qm4lm\" (UID: \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\") " pod="openstack/barbican-db-sync-qm4lm"
\"kubernetes.io/secret/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-combined-ca-bundle\") pod \"barbican-db-sync-qm4lm\" (UID: \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\") " pod="openstack/barbican-db-sync-qm4lm" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.752052 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-scripts\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.752563 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.765102 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx7r8\" (UniqueName: \"kubernetes.io/projected/b2c0ac8f-2b76-45a3-af85-5990913bc03a-kube-api-access-gx7r8\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.770708 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-nsg2v"] Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.771910 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nsg2v" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.800849 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.800946 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kc4sk" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.801077 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.806730 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-ovsdbserver-sb\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.806775 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-ovsdbserver-nb\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.806796 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-dns-svc\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.806850 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0204e2ef-b54e-40fd-a896-d366754a5b5f-scripts\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.806899 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmdhh\" (UniqueName: \"kubernetes.io/projected/adba5246-4615-4faf-be59-601713f619bf-kube-api-access-nmdhh\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.806918 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0204e2ef-b54e-40fd-a896-d366754a5b5f-horizon-secret-key\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.806984 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-config\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.807003 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0204e2ef-b54e-40fd-a896-d366754a5b5f-config-data\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.807034 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0204e2ef-b54e-40fd-a896-d366754a5b5f-logs\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.807387 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0204e2ef-b54e-40fd-a896-d366754a5b5f-logs\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.821704 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-db-sync-config-data\") pod \"barbican-db-sync-qm4lm\" (UID: \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\") " pod="openstack/barbican-db-sync-qm4lm"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.825679 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nsg2v"]
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.825890 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.826972 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0204e2ef-b54e-40fd-a896-d366754a5b5f-config-data\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.831049 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0204e2ef-b54e-40fd-a896-d366754a5b5f-scripts\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.831509 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.832917 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-config-data\") pod \"ceilometer-0\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " pod="openstack/ceilometer-0"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.848559 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.848855 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0204e2ef-b54e-40fd-a896-d366754a5b5f-horizon-secret-key\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.894068 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qm4lm"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.909680 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-ovsdbserver-sb\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.910373 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-ovsdbserver-sb\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.935773 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgmsm\" (UniqueName: \"kubernetes.io/projected/0204e2ef-b54e-40fd-a896-d366754a5b5f-kube-api-access-fgmsm\") pod \"horizon-5666447f7c-7kf4h\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " pod="openstack/horizon-5666447f7c-7kf4h"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.942092 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-ovsdbserver-nb\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.951287 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-dns-svc\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.951373 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27rn\" (UniqueName: \"kubernetes.io/projected/11533631-6479-4f8b-baaf-b1c71de4a966-kube-api-access-b27rn\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.951496 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmdhh\" (UniqueName: \"kubernetes.io/projected/adba5246-4615-4faf-be59-601713f619bf-kube-api-access-nmdhh\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.951565 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-scripts\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v"
Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.951618 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11533631-6479-4f8b-baaf-b1c71de4a966-logs\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v"
pod="openstack/placement-db-sync-nsg2v" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.951737 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-config\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.951778 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-combined-ca-bundle\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.951812 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-config-data\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.945448 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-ovsdbserver-nb\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.952546 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-dns-svc\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.955408 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-config\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" Nov 25 07:02:41 crc kubenswrapper[4482]: I1125 07:02:41.989907 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmdhh\" (UniqueName: \"kubernetes.io/projected/adba5246-4615-4faf-be59-601713f619bf-kube-api-access-nmdhh\") pod \"dnsmasq-dns-67fc948c8c-tbrq2\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.025828 4482 util.go:30] "No sandbox for pod can be found. 
Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.054204 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-combined-ca-bundle\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v"
Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.054332 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-config-data\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v"
Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.054557 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b27rn\" (UniqueName: \"kubernetes.io/projected/11533631-6479-4f8b-baaf-b1c71de4a966-kube-api-access-b27rn\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v"
Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.058409 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-scripts\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v"
Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.058499 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11533631-6479-4f8b-baaf-b1c71de4a966-logs\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v"
Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.059193 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-combined-ca-bundle\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v"
Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.059404 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11533631-6479-4f8b-baaf-b1c71de4a966-logs\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v"
Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.061133 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-scripts\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v"
Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.070705 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-config-data\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v"
Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.077097 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b27rn\" (UniqueName: \"kubernetes.io/projected/11533631-6479-4f8b-baaf-b1c71de4a966-kube-api-access-b27rn\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v"
(UniqueName: \"kubernetes.io/projected/11533631-6479-4f8b-baaf-b1c71de4a966-kube-api-access-b27rn\") pod \"placement-db-sync-nsg2v\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " pod="openstack/placement-db-sync-nsg2v" Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.175319 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nsg2v" Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.221401 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5666447f7c-7kf4h" Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.257269 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66d8846475-ghcrk"] Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.269461 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76cc5bdc65-wzwtb"] Nov 25 07:02:42 crc kubenswrapper[4482]: W1125 07:02:42.277303 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55ccee6e_6831_4f26_b3f3_5c6de363adb8.slice/crio-4e5714afc124c6e98ee907f68a16d32b61750f17cf1b9efb9a331d9b0d00bf1b WatchSource:0}: Error finding container 4e5714afc124c6e98ee907f68a16d32b61750f17cf1b9efb9a331d9b0d00bf1b: Status 404 returned error can't find the container with id 4e5714afc124c6e98ee907f68a16d32b61750f17cf1b9efb9a331d9b0d00bf1b Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.386163 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-blhln"] Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.411297 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"01256899f25b22641a51017dcd75d52fa04bed358535981e88e6bd67d5993071"} Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.411366 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"cbccc7a326da0efabb83ab814dd9c5b04f38d9c8825422b59771445ff7e5c70d"} Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.446567 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76cc5bdc65-wzwtb" event={"ID":"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1","Type":"ContainerStarted","Data":"b66254d166d6319c707e0dffdc8870b438f9992483734cb1863c72ac7f46c018"} Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.462846 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d8846475-ghcrk" event={"ID":"55ccee6e-6831-4f26-b3f3-5c6de363adb8","Type":"ContainerStarted","Data":"4e5714afc124c6e98ee907f68a16d32b61750f17cf1b9efb9a331d9b0d00bf1b"} Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.742407 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-7b7rr"] Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.776753 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qm4lm"] Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.793537 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ggvxs"] Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.814519 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-v2dqt"] Nov 25 07:02:42 crc kubenswrapper[4482]: I1125 07:02:42.833352 4482 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:02:42 crc kubenswrapper[4482]: W1125 07:02:42.870650 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2c0ac8f_2b76_45a3_af85_5990913bc03a.slice/crio-8fd933553d12649d14af6df1346f4151f38368f80bf42e562bd2db1971aa80a8 WatchSource:0}: Error finding container 8fd933553d12649d14af6df1346f4151f38368f80bf42e562bd2db1971aa80a8: Status 404 returned error can't find the container with id 8fd933553d12649d14af6df1346f4151f38368f80bf42e562bd2db1971aa80a8 Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.104869 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nsg2v"] Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.128221 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5666447f7c-7kf4h"] Nov 25 07:02:43 crc kubenswrapper[4482]: W1125 07:02:43.132204 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11533631_6479_4f8b_baaf_b1c71de4a966.slice/crio-a18d1caca218408bf7e96770a225bea12c87c148661ee91deeefbbb8c5199b00 WatchSource:0}: Error finding container a18d1caca218408bf7e96770a225bea12c87c148661ee91deeefbbb8c5199b00: Status 404 returned error can't find the container with id a18d1caca218408bf7e96770a225bea12c87c148661ee91deeefbbb8c5199b00 Nov 25 07:02:43 crc kubenswrapper[4482]: W1125 07:02:43.144504 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0204e2ef_b54e_40fd_a896_d366754a5b5f.slice/crio-e8bbbadee526ba1b69fc08ba3da366e060d1152ab5cce94d510fff496bc72bc9 WatchSource:0}: Error finding container e8bbbadee526ba1b69fc08ba3da366e060d1152ab5cce94d510fff496bc72bc9: Status 404 returned error can't find the container with id e8bbbadee526ba1b69fc08ba3da366e060d1152ab5cce94d510fff496bc72bc9 Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.170328 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fc948c8c-tbrq2"] Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.475649 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qm4lm" event={"ID":"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f","Type":"ContainerStarted","Data":"f932cee4fa0b02d63a97e12a7b7baf7c3f6509094614d6f7deb7ce9f2808b31d"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.477157 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-blhln" event={"ID":"be3949da-bc32-48f7-8330-031cc2de23e4","Type":"ContainerStarted","Data":"00540160539f48823ad922bb8b446774532ea892d3bbad9145b32cafa55fc6ea"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.477203 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-blhln" event={"ID":"be3949da-bc32-48f7-8330-031cc2de23e4","Type":"ContainerStarted","Data":"a460291ad640fd09f202812bb0fd24ae6c4b4143181e38cd3d2458aa65a9ef0d"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.481232 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7b7rr" event={"ID":"573eba52-c038-42e0-89a7-4791962151a4","Type":"ContainerStarted","Data":"7caca70f49e5acd7a27569de6c3729ad30f554a367594838e2cb7e93f9f3dc80"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.481261 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-sync-7b7rr" event={"ID":"573eba52-c038-42e0-89a7-4791962151a4","Type":"ContainerStarted","Data":"1321eda47494c4c41ac5515c0f23a561991608a113ed67c36143c84110ca03ac"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.483775 4482 generic.go:334] "Generic (PLEG): container finished" podID="adba5246-4615-4faf-be59-601713f619bf" containerID="7c9862735a9d7054c5801097e686112141024c5aba39cc1ae42c95536ac3aea1" exitCode=0 Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.483828 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" event={"ID":"adba5246-4615-4faf-be59-601713f619bf","Type":"ContainerDied","Data":"7c9862735a9d7054c5801097e686112141024c5aba39cc1ae42c95536ac3aea1"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.483845 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" event={"ID":"adba5246-4615-4faf-be59-601713f619bf","Type":"ContainerStarted","Data":"58081776b7de2c170e2f04f3b633b6ca05f5e10733b7cf0fd26e39e7572ecc8e"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.493342 4482 generic.go:334] "Generic (PLEG): container finished" podID="55ccee6e-6831-4f26-b3f3-5c6de363adb8" containerID="258dd8e2d8b62c29c592d1ec770f5cc6abc5d8d55859e4aba9603f23a1be369d" exitCode=0 Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.493455 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d8846475-ghcrk" event={"ID":"55ccee6e-6831-4f26-b3f3-5c6de363adb8","Type":"ContainerDied","Data":"258dd8e2d8b62c29c592d1ec770f5cc6abc5d8d55859e4aba9603f23a1be369d"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.529667 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-blhln" podStartSLOduration=3.529647096 podStartE2EDuration="3.529647096s" podCreationTimestamp="2025-11-25 07:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:02:43.513378646 +0000 UTC m=+938.001609906" watchObservedRunningTime="2025-11-25 07:02:43.529647096 +0000 UTC m=+938.017878355" Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.596755 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"844cd9cc42a838e329ca21de677ff8a61f7a42c21f28817ec1a2972ced9b502b"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.597109 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"21d6404f-f801-4230-af65-d110706155c6","Type":"ContainerStarted","Data":"82b112cf41b083392f5c3979ab8897e999865833dd5870dbd48b211a574342c6"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.603303 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-7b7rr" podStartSLOduration=2.603280153 podStartE2EDuration="2.603280153s" podCreationTimestamp="2025-11-25 07:02:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:02:43.583605113 +0000 UTC m=+938.071836372" watchObservedRunningTime="2025-11-25 07:02:43.603280153 +0000 UTC m=+938.091511413" Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.621771 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ggvxs" 
event={"ID":"6f1385f6-5258-4372-a20a-30a7229ec2e8","Type":"ContainerStarted","Data":"c69b601921ad69eda2a72a0c54d4da7c58c0aed6349939cd194e1e2dab3939be"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.625267 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5666447f7c-7kf4h" event={"ID":"0204e2ef-b54e-40fd-a896-d366754a5b5f","Type":"ContainerStarted","Data":"e8bbbadee526ba1b69fc08ba3da366e060d1152ab5cce94d510fff496bc72bc9"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.635966 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2c0ac8f-2b76-45a3-af85-5990913bc03a","Type":"ContainerStarted","Data":"8fd933553d12649d14af6df1346f4151f38368f80bf42e562bd2db1971aa80a8"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.648521 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nsg2v" event={"ID":"11533631-6479-4f8b-baaf-b1c71de4a966","Type":"ContainerStarted","Data":"a18d1caca218408bf7e96770a225bea12c87c148661ee91deeefbbb8c5199b00"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.669794 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.506281314 podStartE2EDuration="47.669769967s" podCreationTimestamp="2025-11-25 07:01:56 +0000 UTC" firstStartedPulling="2025-11-25 07:02:30.509874372 +0000 UTC m=+924.998105631" lastFinishedPulling="2025-11-25 07:02:39.673362995 +0000 UTC m=+934.161594284" observedRunningTime="2025-11-25 07:02:43.667927213 +0000 UTC m=+938.156158472" watchObservedRunningTime="2025-11-25 07:02:43.669769967 +0000 UTC m=+938.158001226" Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.703366 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-v2dqt" event={"ID":"3e50321d-a59a-4d39-a485-4299ced13bdc","Type":"ContainerStarted","Data":"293862be2d07354e45d1c5f184705d0f03baf24cace4ea5e457140020cc76a92"} Nov 25 07:02:43 crc kubenswrapper[4482]: I1125 07:02:43.985155 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fc948c8c-tbrq2"] Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.000009 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66d8846475-ghcrk" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.004841 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698b5d6cf7-cn5k5"] Nov 25 07:02:44 crc kubenswrapper[4482]: E1125 07:02:44.005226 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ccee6e-6831-4f26-b3f3-5c6de363adb8" containerName="init" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.005241 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ccee6e-6831-4f26-b3f3-5c6de363adb8" containerName="init" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.009560 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="55ccee6e-6831-4f26-b3f3-5c6de363adb8" containerName="init" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.010818 4482 util.go:30] "No sandbox for pod can be found. 
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.019345 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.073291 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698b5d6cf7-cn5k5"]
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.120440 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-ovsdbserver-sb\") pod \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") "
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.120578 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-ovsdbserver-nb\") pod \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") "
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.120696 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-dns-svc\") pod \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") "
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.120776 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-config\") pod \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") "
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.120816 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fcxt\" (UniqueName: \"kubernetes.io/projected/55ccee6e-6831-4f26-b3f3-5c6de363adb8-kube-api-access-4fcxt\") pod \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\" (UID: \"55ccee6e-6831-4f26-b3f3-5c6de363adb8\") "
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.121079 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-ovsdbserver-sb\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.121566 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-config\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.121614 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzk7q\" (UniqueName: \"kubernetes.io/projected/9ed040b0-24c3-4b02-aefb-a7eaced9d994-kube-api-access-vzk7q\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.121649 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-ovsdbserver-nb\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5"
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-ovsdbserver-nb\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.121676 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-dns-swift-storage-0\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.121722 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-dns-svc\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.139501 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ccee6e-6831-4f26-b3f3-5c6de363adb8-kube-api-access-4fcxt" (OuterVolumeSpecName: "kube-api-access-4fcxt") pod "55ccee6e-6831-4f26-b3f3-5c6de363adb8" (UID: "55ccee6e-6831-4f26-b3f3-5c6de363adb8"). InnerVolumeSpecName "kube-api-access-4fcxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.157066 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-config" (OuterVolumeSpecName: "config") pod "55ccee6e-6831-4f26-b3f3-5c6de363adb8" (UID: "55ccee6e-6831-4f26-b3f3-5c6de363adb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.157686 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "55ccee6e-6831-4f26-b3f3-5c6de363adb8" (UID: "55ccee6e-6831-4f26-b3f3-5c6de363adb8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.159778 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55ccee6e-6831-4f26-b3f3-5c6de363adb8" (UID: "55ccee6e-6831-4f26-b3f3-5c6de363adb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.168155 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "55ccee6e-6831-4f26-b3f3-5c6de363adb8" (UID: "55ccee6e-6831-4f26-b3f3-5c6de363adb8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.223098 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-dns-svc\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.223252 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-ovsdbserver-sb\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.223419 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-config\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.223460 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzk7q\" (UniqueName: \"kubernetes.io/projected/9ed040b0-24c3-4b02-aefb-a7eaced9d994-kube-api-access-vzk7q\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.223490 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-ovsdbserver-nb\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.223514 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-dns-swift-storage-0\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.223596 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.223606 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.223614 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fcxt\" (UniqueName: \"kubernetes.io/projected/55ccee6e-6831-4f26-b3f3-5c6de363adb8-kube-api-access-4fcxt\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.223623 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.223630 4482 reconciler_common.go:293] "Volume detached for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ccee6e-6831-4f26-b3f3-5c6de363adb8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.224245 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-config\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.224725 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-ovsdbserver-nb\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.224782 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-dns-svc\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.225238 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-ovsdbserver-sb\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.225839 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-dns-swift-storage-0\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.260428 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzk7q\" (UniqueName: \"kubernetes.io/projected/9ed040b0-24c3-4b02-aefb-a7eaced9d994-kube-api-access-vzk7q\") pod \"dnsmasq-dns-698b5d6cf7-cn5k5\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.274688 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76cc5bdc65-wzwtb"] Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.288406 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.317435 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78d554fc8c-f2fdb"] Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.318820 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78d554fc8c-f2fdb" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.365605 4482 util.go:30] "No sandbox for pod can be found. 
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.399075 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78d554fc8c-f2fdb"]
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.430323 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-logs\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.430642 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-scripts\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.430676 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8bvc\" (UniqueName: \"kubernetes.io/projected/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-kube-api-access-g8bvc\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.430716 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-horizon-secret-key\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.430845 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-config-data\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.533538 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-scripts\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.533602 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8bvc\" (UniqueName: \"kubernetes.io/projected/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-kube-api-access-g8bvc\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.533653 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-horizon-secret-key\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.533838 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-config-data\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb"
\"kubernetes.io/configmap/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-config-data\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.533898 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-logs\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.534366 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-logs\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.535081 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-scripts\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.536725 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-config-data\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.543669 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-horizon-secret-key\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.549755 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8bvc\" (UniqueName: \"kubernetes.io/projected/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-kube-api-access-g8bvc\") pod \"horizon-78d554fc8c-f2fdb\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " pod="openstack/horizon-78d554fc8c-f2fdb" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.636090 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78d554fc8c-f2fdb" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.746127 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" event={"ID":"adba5246-4615-4faf-be59-601713f619bf","Type":"ContainerStarted","Data":"b14f190d497c51c5bb0539610efc1a32693f012ac5cf1a023c0020bda62bde3b"} Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.749205 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.759539 4482 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.760507 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d8846475-ghcrk" event={"ID":"55ccee6e-6831-4f26-b3f3-5c6de363adb8","Type":"ContainerDied","Data":"4e5714afc124c6e98ee907f68a16d32b61750f17cf1b9efb9a331d9b0d00bf1b"}
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.760557 4482 scope.go:117] "RemoveContainer" containerID="258dd8e2d8b62c29c592d1ec770f5cc6abc5d8d55859e4aba9603f23a1be369d"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.809954 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" podStartSLOduration=3.80993934 podStartE2EDuration="3.80993934s" podCreationTimestamp="2025-11-25 07:02:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:02:44.767188158 +0000 UTC m=+939.255419417" watchObservedRunningTime="2025-11-25 07:02:44.80993934 +0000 UTC m=+939.298170629"
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.846428 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66d8846475-ghcrk"]
Nov 25 07:02:44 crc kubenswrapper[4482]: I1125 07:02:44.848457 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66d8846475-ghcrk"]
Nov 25 07:02:45 crc kubenswrapper[4482]: I1125 07:02:45.013726 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698b5d6cf7-cn5k5"]
Nov 25 07:02:45 crc kubenswrapper[4482]: W1125 07:02:45.019599 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ed040b0_24c3_4b02_aefb_a7eaced9d994.slice/crio-0ef24f5f8612b502168a39bd41cbbe205644825fd8f30e6839570a06ce3ea645 WatchSource:0}: Error finding container 0ef24f5f8612b502168a39bd41cbbe205644825fd8f30e6839570a06ce3ea645: Status 404 returned error can't find the container with id 0ef24f5f8612b502168a39bd41cbbe205644825fd8f30e6839570a06ce3ea645
Nov 25 07:02:45 crc kubenswrapper[4482]: I1125 07:02:45.258093 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78d554fc8c-f2fdb"]
Nov 25 07:02:45 crc kubenswrapper[4482]: W1125 07:02:45.283448 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod961bd3cf_55d9_48b0_8f63_a8c2c2942c41.slice/crio-da65c75ab379384341d22f4f0f222fc35f34a5dbdfeafcfec05d07ff228cc94c WatchSource:0}: Error finding container da65c75ab379384341d22f4f0f222fc35f34a5dbdfeafcfec05d07ff228cc94c: Status 404 returned error can't find the container with id da65c75ab379384341d22f4f0f222fc35f34a5dbdfeafcfec05d07ff228cc94c
Nov 25 07:02:45 crc kubenswrapper[4482]: I1125 07:02:45.785091 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d554fc8c-f2fdb" event={"ID":"961bd3cf-55d9-48b0-8f63-a8c2c2942c41","Type":"ContainerStarted","Data":"da65c75ab379384341d22f4f0f222fc35f34a5dbdfeafcfec05d07ff228cc94c"}
Nov 25 07:02:45 crc kubenswrapper[4482]: I1125 07:02:45.794633 4482 generic.go:334] "Generic (PLEG): container finished" podID="9ed040b0-24c3-4b02-aefb-a7eaced9d994" containerID="ce8237e8475643c4c94365f50613555290c1393f92607459d516c0e107d255cb" exitCode=0
Nov 25 07:02:45 crc kubenswrapper[4482]: I1125 07:02:45.794715 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" event={"ID":"9ed040b0-24c3-4b02-aefb-a7eaced9d994","Type":"ContainerDied","Data":"ce8237e8475643c4c94365f50613555290c1393f92607459d516c0e107d255cb"}
pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" event={"ID":"9ed040b0-24c3-4b02-aefb-a7eaced9d994","Type":"ContainerDied","Data":"ce8237e8475643c4c94365f50613555290c1393f92607459d516c0e107d255cb"} Nov 25 07:02:45 crc kubenswrapper[4482]: I1125 07:02:45.794811 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" event={"ID":"9ed040b0-24c3-4b02-aefb-a7eaced9d994","Type":"ContainerStarted","Data":"0ef24f5f8612b502168a39bd41cbbe205644825fd8f30e6839570a06ce3ea645"} Nov 25 07:02:45 crc kubenswrapper[4482]: I1125 07:02:45.794890 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" podUID="adba5246-4615-4faf-be59-601713f619bf" containerName="dnsmasq-dns" containerID="cri-o://b14f190d497c51c5bb0539610efc1a32693f012ac5cf1a023c0020bda62bde3b" gracePeriod=10 Nov 25 07:02:45 crc kubenswrapper[4482]: I1125 07:02:45.862625 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ccee6e-6831-4f26-b3f3-5c6de363adb8" path="/var/lib/kubelet/pods/55ccee6e-6831-4f26-b3f3-5c6de363adb8/volumes" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.279311 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.390241 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-config\") pod \"adba5246-4615-4faf-be59-601713f619bf\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.390408 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-dns-svc\") pod \"adba5246-4615-4faf-be59-601713f619bf\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.390663 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmdhh\" (UniqueName: \"kubernetes.io/projected/adba5246-4615-4faf-be59-601713f619bf-kube-api-access-nmdhh\") pod \"adba5246-4615-4faf-be59-601713f619bf\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.390774 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-ovsdbserver-nb\") pod \"adba5246-4615-4faf-be59-601713f619bf\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.390809 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-ovsdbserver-sb\") pod \"adba5246-4615-4faf-be59-601713f619bf\" (UID: \"adba5246-4615-4faf-be59-601713f619bf\") " Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.404334 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adba5246-4615-4faf-be59-601713f619bf-kube-api-access-nmdhh" (OuterVolumeSpecName: "kube-api-access-nmdhh") pod "adba5246-4615-4faf-be59-601713f619bf" (UID: "adba5246-4615-4faf-be59-601713f619bf"). InnerVolumeSpecName "kube-api-access-nmdhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.452962 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "adba5246-4615-4faf-be59-601713f619bf" (UID: "adba5246-4615-4faf-be59-601713f619bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.455048 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "adba5246-4615-4faf-be59-601713f619bf" (UID: "adba5246-4615-4faf-be59-601713f619bf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.481138 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "adba5246-4615-4faf-be59-601713f619bf" (UID: "adba5246-4615-4faf-be59-601713f619bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.496944 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmdhh\" (UniqueName: \"kubernetes.io/projected/adba5246-4615-4faf-be59-601713f619bf-kube-api-access-nmdhh\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.496968 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.496977 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.496986 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.498307 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-config" (OuterVolumeSpecName: "config") pod "adba5246-4615-4faf-be59-601713f619bf" (UID: "adba5246-4615-4faf-be59-601713f619bf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.600641 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adba5246-4615-4faf-be59-601713f619bf-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.806241 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" event={"ID":"9ed040b0-24c3-4b02-aefb-a7eaced9d994","Type":"ContainerStarted","Data":"cc9522752075bcba35687a9363077030121b0489413b1b8a70a9aecd148b1783"} Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.807680 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.811678 4482 generic.go:334] "Generic (PLEG): container finished" podID="adba5246-4615-4faf-be59-601713f619bf" containerID="b14f190d497c51c5bb0539610efc1a32693f012ac5cf1a023c0020bda62bde3b" exitCode=0 Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.811736 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" event={"ID":"adba5246-4615-4faf-be59-601713f619bf","Type":"ContainerDied","Data":"b14f190d497c51c5bb0539610efc1a32693f012ac5cf1a023c0020bda62bde3b"} Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.811761 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.811783 4482 scope.go:117] "RemoveContainer" containerID="b14f190d497c51c5bb0539610efc1a32693f012ac5cf1a023c0020bda62bde3b" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.811769 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fc948c8c-tbrq2" event={"ID":"adba5246-4615-4faf-be59-601713f619bf","Type":"ContainerDied","Data":"58081776b7de2c170e2f04f3b633b6ca05f5e10733b7cf0fd26e39e7572ecc8e"} Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.822245 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" podStartSLOduration=3.822231831 podStartE2EDuration="3.822231831s" podCreationTimestamp="2025-11-25 07:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:02:46.821386176 +0000 UTC m=+941.309617436" watchObservedRunningTime="2025-11-25 07:02:46.822231831 +0000 UTC m=+941.310463090" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.853357 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fc948c8c-tbrq2"] Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.862306 4482 scope.go:117] "RemoveContainer" containerID="7c9862735a9d7054c5801097e686112141024c5aba39cc1ae42c95536ac3aea1" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.862632 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fc948c8c-tbrq2"] Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.893475 4482 scope.go:117] "RemoveContainer" containerID="b14f190d497c51c5bb0539610efc1a32693f012ac5cf1a023c0020bda62bde3b" Nov 25 07:02:46 crc kubenswrapper[4482]: E1125 07:02:46.895910 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b14f190d497c51c5bb0539610efc1a32693f012ac5cf1a023c0020bda62bde3b\": container with ID starting with b14f190d497c51c5bb0539610efc1a32693f012ac5cf1a023c0020bda62bde3b not found: ID does not exist" containerID="b14f190d497c51c5bb0539610efc1a32693f012ac5cf1a023c0020bda62bde3b" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.895946 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b14f190d497c51c5bb0539610efc1a32693f012ac5cf1a023c0020bda62bde3b"} err="failed to get container status \"b14f190d497c51c5bb0539610efc1a32693f012ac5cf1a023c0020bda62bde3b\": rpc error: code = NotFound desc = could not find container \"b14f190d497c51c5bb0539610efc1a32693f012ac5cf1a023c0020bda62bde3b\": container with ID starting with b14f190d497c51c5bb0539610efc1a32693f012ac5cf1a023c0020bda62bde3b not found: ID does not exist" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.895973 4482 scope.go:117] "RemoveContainer" containerID="7c9862735a9d7054c5801097e686112141024c5aba39cc1ae42c95536ac3aea1" Nov 25 07:02:46 crc kubenswrapper[4482]: E1125 07:02:46.899871 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c9862735a9d7054c5801097e686112141024c5aba39cc1ae42c95536ac3aea1\": container with ID starting with 7c9862735a9d7054c5801097e686112141024c5aba39cc1ae42c95536ac3aea1 not found: ID does not exist" containerID="7c9862735a9d7054c5801097e686112141024c5aba39cc1ae42c95536ac3aea1" Nov 25 07:02:46 crc kubenswrapper[4482]: I1125 07:02:46.899905 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c9862735a9d7054c5801097e686112141024c5aba39cc1ae42c95536ac3aea1"} err="failed to get container status \"7c9862735a9d7054c5801097e686112141024c5aba39cc1ae42c95536ac3aea1\": rpc error: code = NotFound desc = could not find container \"7c9862735a9d7054c5801097e686112141024c5aba39cc1ae42c95536ac3aea1\": container with ID starting with 7c9862735a9d7054c5801097e686112141024c5aba39cc1ae42c95536ac3aea1 not found: ID does not exist" Nov 25 07:02:46 crc kubenswrapper[4482]: E1125 07:02:46.915457 4482 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadba5246_4615_4faf_be59_601713f619bf.slice\": RecentStats: unable to find data in memory cache]" Nov 25 07:02:47 crc kubenswrapper[4482]: I1125 07:02:47.831646 4482 generic.go:334] "Generic (PLEG): container finished" podID="be3949da-bc32-48f7-8330-031cc2de23e4" containerID="00540160539f48823ad922bb8b446774532ea892d3bbad9145b32cafa55fc6ea" exitCode=0 Nov 25 07:02:47 crc kubenswrapper[4482]: I1125 07:02:47.848399 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adba5246-4615-4faf-be59-601713f619bf" path="/var/lib/kubelet/pods/adba5246-4615-4faf-be59-601713f619bf/volumes" Nov 25 07:02:47 crc kubenswrapper[4482]: I1125 07:02:47.849084 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-blhln" event={"ID":"be3949da-bc32-48f7-8330-031cc2de23e4","Type":"ContainerDied","Data":"00540160539f48823ad922bb8b446774532ea892d3bbad9145b32cafa55fc6ea"} Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.204528 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-blhln" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.371582 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-scripts\") pod \"be3949da-bc32-48f7-8330-031cc2de23e4\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.371777 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-combined-ca-bundle\") pod \"be3949da-bc32-48f7-8330-031cc2de23e4\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.371819 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-config-data\") pod \"be3949da-bc32-48f7-8330-031cc2de23e4\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.371870 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-fernet-keys\") pod \"be3949da-bc32-48f7-8330-031cc2de23e4\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.371887 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-credential-keys\") pod \"be3949da-bc32-48f7-8330-031cc2de23e4\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.371973 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs8dm\" (UniqueName: \"kubernetes.io/projected/be3949da-bc32-48f7-8330-031cc2de23e4-kube-api-access-xs8dm\") pod \"be3949da-bc32-48f7-8330-031cc2de23e4\" (UID: \"be3949da-bc32-48f7-8330-031cc2de23e4\") " Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.377598 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "be3949da-bc32-48f7-8330-031cc2de23e4" (UID: "be3949da-bc32-48f7-8330-031cc2de23e4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.378689 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "be3949da-bc32-48f7-8330-031cc2de23e4" (UID: "be3949da-bc32-48f7-8330-031cc2de23e4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.385244 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be3949da-bc32-48f7-8330-031cc2de23e4-kube-api-access-xs8dm" (OuterVolumeSpecName: "kube-api-access-xs8dm") pod "be3949da-bc32-48f7-8330-031cc2de23e4" (UID: "be3949da-bc32-48f7-8330-031cc2de23e4"). InnerVolumeSpecName "kube-api-access-xs8dm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.390025 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-scripts" (OuterVolumeSpecName: "scripts") pod "be3949da-bc32-48f7-8330-031cc2de23e4" (UID: "be3949da-bc32-48f7-8330-031cc2de23e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.396415 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-config-data" (OuterVolumeSpecName: "config-data") pod "be3949da-bc32-48f7-8330-031cc2de23e4" (UID: "be3949da-bc32-48f7-8330-031cc2de23e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.400855 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be3949da-bc32-48f7-8330-031cc2de23e4" (UID: "be3949da-bc32-48f7-8330-031cc2de23e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.474801 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs8dm\" (UniqueName: \"kubernetes.io/projected/be3949da-bc32-48f7-8330-031cc2de23e4-kube-api-access-xs8dm\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.475143 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.475155 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.475196 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.475207 4482 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.475216 4482 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/be3949da-bc32-48f7-8330-031cc2de23e4-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.860519 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-blhln" event={"ID":"be3949da-bc32-48f7-8330-031cc2de23e4","Type":"ContainerDied","Data":"a460291ad640fd09f202812bb0fd24ae6c4b4143181e38cd3d2458aa65a9ef0d"} Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.860559 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a460291ad640fd09f202812bb0fd24ae6c4b4143181e38cd3d2458aa65a9ef0d" Nov 25 07:02:49 crc kubenswrapper[4482]: I1125 07:02:49.860574 4482 util.go:48] "No ready 
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.029948 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-blhln"]
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.034982 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-blhln"]
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.141670 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mrd6z"]
Nov 25 07:02:50 crc kubenswrapper[4482]: E1125 07:02:50.142003 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be3949da-bc32-48f7-8330-031cc2de23e4" containerName="keystone-bootstrap"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.142017 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="be3949da-bc32-48f7-8330-031cc2de23e4" containerName="keystone-bootstrap"
Nov 25 07:02:50 crc kubenswrapper[4482]: E1125 07:02:50.142025 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adba5246-4615-4faf-be59-601713f619bf" containerName="dnsmasq-dns"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.142031 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="adba5246-4615-4faf-be59-601713f619bf" containerName="dnsmasq-dns"
Nov 25 07:02:50 crc kubenswrapper[4482]: E1125 07:02:50.142041 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adba5246-4615-4faf-be59-601713f619bf" containerName="init"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.142046 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="adba5246-4615-4faf-be59-601713f619bf" containerName="init"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.142234 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="adba5246-4615-4faf-be59-601713f619bf" containerName="dnsmasq-dns"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.142257 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="be3949da-bc32-48f7-8330-031cc2de23e4" containerName="keystone-bootstrap"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.143584 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.145302 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.145454 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.145657 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.146077 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.146787 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nl4pz"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.160059 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mrd6z"]
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.204565 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-config-data\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.204611 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4ds4\" (UniqueName: \"kubernetes.io/projected/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-kube-api-access-c4ds4\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.204703 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-scripts\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.204748 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-combined-ca-bundle\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.204790 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-fernet-keys\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.204863 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-credential-keys\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.306382 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-credential-keys\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.306452 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-config-data\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.306470 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4ds4\" (UniqueName: \"kubernetes.io/projected/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-kube-api-access-c4ds4\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.306508 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-scripts\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.306545 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-combined-ca-bundle\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.306571 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-fernet-keys\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.311691 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-config-data\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.322873 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-scripts\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.323004 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-credential-keys\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.323162 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-fernet-keys\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z"
pod="openstack/keystone-bootstrap-mrd6z" Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.326080 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4ds4\" (UniqueName: \"kubernetes.io/projected/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-kube-api-access-c4ds4\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z" Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.340357 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-combined-ca-bundle\") pod \"keystone-bootstrap-mrd6z\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " pod="openstack/keystone-bootstrap-mrd6z" Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.457035 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mrd6z" Nov 25 07:02:50 crc kubenswrapper[4482]: I1125 07:02:50.939307 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mrd6z"] Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.223968 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5666447f7c-7kf4h"] Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.263926 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5fbb9df54d-nfljm"] Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.266034 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.270804 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.281952 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fbb9df54d-nfljm"] Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.343636 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-combined-ca-bundle\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.343741 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-horizon-secret-key\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.344240 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-config-data\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.344387 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-logs\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 
07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.344457 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-scripts\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.344576 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fwz\" (UniqueName: \"kubernetes.io/projected/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-kube-api-access-z5fwz\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.344609 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-horizon-tls-certs\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.400858 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78d554fc8c-f2fdb"] Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.435620 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7949b4656d-jjsj8"] Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.438035 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.473913 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5fwz\" (UniqueName: \"kubernetes.io/projected/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-kube-api-access-z5fwz\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.473991 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-horizon-tls-certs\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.474162 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-combined-ca-bundle\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.474457 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-horizon-secret-key\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.474504 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-config-data\") pod \"horizon-5fbb9df54d-nfljm\" (UID: 
\"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.474647 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-logs\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.475013 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-scripts\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.480419 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-scripts\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.481488 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-config-data\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.482348 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7949b4656d-jjsj8"] Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.482676 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-logs\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.491365 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-horizon-tls-certs\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.494706 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-combined-ca-bundle\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.495092 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-horizon-secret-key\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.500508 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5fwz\" (UniqueName: \"kubernetes.io/projected/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-kube-api-access-z5fwz\") pod \"horizon-5fbb9df54d-nfljm\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " 
pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.576910 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5634033-0ed5-4a52-9d37-a52ce07e4f50-logs\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.577314 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5634033-0ed5-4a52-9d37-a52ce07e4f50-horizon-tls-certs\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.577472 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e5634033-0ed5-4a52-9d37-a52ce07e4f50-horizon-secret-key\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.577531 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5634033-0ed5-4a52-9d37-a52ce07e4f50-scripts\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.577654 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7g7r\" (UniqueName: \"kubernetes.io/projected/e5634033-0ed5-4a52-9d37-a52ce07e4f50-kube-api-access-s7g7r\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.577760 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5634033-0ed5-4a52-9d37-a52ce07e4f50-combined-ca-bundle\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.577807 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5634033-0ed5-4a52-9d37-a52ce07e4f50-config-data\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.594044 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.687831 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5634033-0ed5-4a52-9d37-a52ce07e4f50-horizon-tls-certs\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.687904 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e5634033-0ed5-4a52-9d37-a52ce07e4f50-horizon-secret-key\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.687932 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5634033-0ed5-4a52-9d37-a52ce07e4f50-scripts\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.687978 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7g7r\" (UniqueName: \"kubernetes.io/projected/e5634033-0ed5-4a52-9d37-a52ce07e4f50-kube-api-access-s7g7r\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.688022 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5634033-0ed5-4a52-9d37-a52ce07e4f50-combined-ca-bundle\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.688052 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5634033-0ed5-4a52-9d37-a52ce07e4f50-config-data\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.688108 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5634033-0ed5-4a52-9d37-a52ce07e4f50-logs\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.691144 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5634033-0ed5-4a52-9d37-a52ce07e4f50-logs\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.691659 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5634033-0ed5-4a52-9d37-a52ce07e4f50-scripts\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.692477 4482 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5634033-0ed5-4a52-9d37-a52ce07e4f50-horizon-tls-certs\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.693141 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5634033-0ed5-4a52-9d37-a52ce07e4f50-config-data\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.694519 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e5634033-0ed5-4a52-9d37-a52ce07e4f50-horizon-secret-key\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.707729 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5634033-0ed5-4a52-9d37-a52ce07e4f50-combined-ca-bundle\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.711502 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7g7r\" (UniqueName: \"kubernetes.io/projected/e5634033-0ed5-4a52-9d37-a52ce07e4f50-kube-api-access-s7g7r\") pod \"horizon-7949b4656d-jjsj8\" (UID: \"e5634033-0ed5-4a52-9d37-a52ce07e4f50\") " pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.843259 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be3949da-bc32-48f7-8330-031cc2de23e4" path="/var/lib/kubelet/pods/be3949da-bc32-48f7-8330-031cc2de23e4/volumes" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.857301 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.896187 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mrd6z" event={"ID":"8fd67d9d-6ac0-496c-9726-ccb87a383a9a","Type":"ContainerStarted","Data":"44e744e7354094966911ff43826e1f22fc2d929d601bfc29686371979810cb41"} Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.896257 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mrd6z" event={"ID":"8fd67d9d-6ac0-496c-9726-ccb87a383a9a","Type":"ContainerStarted","Data":"60a060866c42b3779fcc4153d9d63e89f10812f1a080f330f21a73bf8d52fa55"} Nov 25 07:02:51 crc kubenswrapper[4482]: I1125 07:02:51.923498 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mrd6z" podStartSLOduration=1.9234848549999999 podStartE2EDuration="1.923484855s" podCreationTimestamp="2025-11-25 07:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:02:51.918343202 +0000 UTC m=+946.406574462" watchObservedRunningTime="2025-11-25 07:02:51.923484855 +0000 UTC m=+946.411716105" Nov 25 07:02:52 crc kubenswrapper[4482]: I1125 07:02:52.186838 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fbb9df54d-nfljm"] Nov 25 07:02:52 crc kubenswrapper[4482]: I1125 07:02:52.577019 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7949b4656d-jjsj8"] Nov 25 07:02:52 crc kubenswrapper[4482]: W1125 07:02:52.592462 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5634033_0ed5_4a52_9d37_a52ce07e4f50.slice/crio-7ae83134efd040d8059a0bf880239a42ec93887d2a9ded2dd34f64b137defc20 WatchSource:0}: Error finding container 7ae83134efd040d8059a0bf880239a42ec93887d2a9ded2dd34f64b137defc20: Status 404 returned error can't find the container with id 7ae83134efd040d8059a0bf880239a42ec93887d2a9ded2dd34f64b137defc20 Nov 25 07:02:52 crc kubenswrapper[4482]: I1125 07:02:52.904751 4482 generic.go:334] "Generic (PLEG): container finished" podID="573eba52-c038-42e0-89a7-4791962151a4" containerID="7caca70f49e5acd7a27569de6c3729ad30f554a367594838e2cb7e93f9f3dc80" exitCode=0 Nov 25 07:02:52 crc kubenswrapper[4482]: I1125 07:02:52.904820 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7b7rr" event={"ID":"573eba52-c038-42e0-89a7-4791962151a4","Type":"ContainerDied","Data":"7caca70f49e5acd7a27569de6c3729ad30f554a367594838e2cb7e93f9f3dc80"} Nov 25 07:02:52 crc kubenswrapper[4482]: I1125 07:02:52.908599 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fbb9df54d-nfljm" event={"ID":"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db","Type":"ContainerStarted","Data":"c5e4b61ec145d2e79cc4c53cef0936f15fef9b0980ae3d65522894c606220f22"} Nov 25 07:02:52 crc kubenswrapper[4482]: I1125 07:02:52.912277 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7949b4656d-jjsj8" event={"ID":"e5634033-0ed5-4a52-9d37-a52ce07e4f50","Type":"ContainerStarted","Data":"7ae83134efd040d8059a0bf880239a42ec93887d2a9ded2dd34f64b137defc20"} Nov 25 07:02:53 crc kubenswrapper[4482]: I1125 07:02:53.926109 4482 generic.go:334] "Generic (PLEG): container finished" podID="8fd67d9d-6ac0-496c-9726-ccb87a383a9a" containerID="44e744e7354094966911ff43826e1f22fc2d929d601bfc29686371979810cb41" 
exitCode=0 Nov 25 07:02:53 crc kubenswrapper[4482]: I1125 07:02:53.926215 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mrd6z" event={"ID":"8fd67d9d-6ac0-496c-9726-ccb87a383a9a","Type":"ContainerDied","Data":"44e744e7354094966911ff43826e1f22fc2d929d601bfc29686371979810cb41"} Nov 25 07:02:54 crc kubenswrapper[4482]: I1125 07:02:54.369013 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:02:54 crc kubenswrapper[4482]: I1125 07:02:54.435293 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9999f46dc-zwcqh"] Nov 25 07:02:54 crc kubenswrapper[4482]: I1125 07:02:54.435569 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" podUID="27015668-67ef-4c76-9a5d-d32a88a24c03" containerName="dnsmasq-dns" containerID="cri-o://d4b52585a05b742925cb717ed472952fd28ef09adaf986fedf1eb9ef552ca217" gracePeriod=10 Nov 25 07:02:54 crc kubenswrapper[4482]: I1125 07:02:54.984453 4482 generic.go:334] "Generic (PLEG): container finished" podID="27015668-67ef-4c76-9a5d-d32a88a24c03" containerID="d4b52585a05b742925cb717ed472952fd28ef09adaf986fedf1eb9ef552ca217" exitCode=0 Nov 25 07:02:54 crc kubenswrapper[4482]: I1125 07:02:54.985603 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" event={"ID":"27015668-67ef-4c76-9a5d-d32a88a24c03","Type":"ContainerDied","Data":"d4b52585a05b742925cb717ed472952fd28ef09adaf986fedf1eb9ef552ca217"} Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.073475 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mrd6z" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.120042 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-fernet-keys\") pod \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.120085 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-combined-ca-bundle\") pod \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.120202 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4ds4\" (UniqueName: \"kubernetes.io/projected/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-kube-api-access-c4ds4\") pod \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.120247 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-scripts\") pod \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.120345 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-credential-keys\") pod \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " Nov 25 07:02:56 
crc kubenswrapper[4482]: I1125 07:02:56.120379 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-config-data\") pod \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\" (UID: \"8fd67d9d-6ac0-496c-9726-ccb87a383a9a\") " Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.125227 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8fd67d9d-6ac0-496c-9726-ccb87a383a9a" (UID: "8fd67d9d-6ac0-496c-9726-ccb87a383a9a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.129319 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8fd67d9d-6ac0-496c-9726-ccb87a383a9a" (UID: "8fd67d9d-6ac0-496c-9726-ccb87a383a9a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.131757 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-scripts" (OuterVolumeSpecName: "scripts") pod "8fd67d9d-6ac0-496c-9726-ccb87a383a9a" (UID: "8fd67d9d-6ac0-496c-9726-ccb87a383a9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.131913 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-kube-api-access-c4ds4" (OuterVolumeSpecName: "kube-api-access-c4ds4") pod "8fd67d9d-6ac0-496c-9726-ccb87a383a9a" (UID: "8fd67d9d-6ac0-496c-9726-ccb87a383a9a"). InnerVolumeSpecName "kube-api-access-c4ds4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.172242 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-config-data" (OuterVolumeSpecName: "config-data") pod "8fd67d9d-6ac0-496c-9726-ccb87a383a9a" (UID: "8fd67d9d-6ac0-496c-9726-ccb87a383a9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.179298 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fd67d9d-6ac0-496c-9726-ccb87a383a9a" (UID: "8fd67d9d-6ac0-496c-9726-ccb87a383a9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.188214 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-7b7rr" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.221983 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573eba52-c038-42e0-89a7-4791962151a4-combined-ca-bundle\") pod \"573eba52-c038-42e0-89a7-4791962151a4\" (UID: \"573eba52-c038-42e0-89a7-4791962151a4\") " Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.222110 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx7qb\" (UniqueName: \"kubernetes.io/projected/573eba52-c038-42e0-89a7-4791962151a4-kube-api-access-vx7qb\") pod \"573eba52-c038-42e0-89a7-4791962151a4\" (UID: \"573eba52-c038-42e0-89a7-4791962151a4\") " Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.222132 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/573eba52-c038-42e0-89a7-4791962151a4-config\") pod \"573eba52-c038-42e0-89a7-4791962151a4\" (UID: \"573eba52-c038-42e0-89a7-4791962151a4\") " Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.222641 4482 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.222664 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.222673 4482 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.222684 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.222695 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4ds4\" (UniqueName: \"kubernetes.io/projected/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-kube-api-access-c4ds4\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.222707 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd67d9d-6ac0-496c-9726-ccb87a383a9a-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.227610 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573eba52-c038-42e0-89a7-4791962151a4-kube-api-access-vx7qb" (OuterVolumeSpecName: "kube-api-access-vx7qb") pod "573eba52-c038-42e0-89a7-4791962151a4" (UID: "573eba52-c038-42e0-89a7-4791962151a4"). InnerVolumeSpecName "kube-api-access-vx7qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.237961 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.274236 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573eba52-c038-42e0-89a7-4791962151a4-config" (OuterVolumeSpecName: "config") pod "573eba52-c038-42e0-89a7-4791962151a4" (UID: "573eba52-c038-42e0-89a7-4791962151a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.279501 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573eba52-c038-42e0-89a7-4791962151a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "573eba52-c038-42e0-89a7-4791962151a4" (UID: "573eba52-c038-42e0-89a7-4791962151a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.324840 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573eba52-c038-42e0-89a7-4791962151a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.324866 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx7qb\" (UniqueName: \"kubernetes.io/projected/573eba52-c038-42e0-89a7-4791962151a4-kube-api-access-vx7qb\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.324877 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/573eba52-c038-42e0-89a7-4791962151a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.425885 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbzhb\" (UniqueName: \"kubernetes.io/projected/27015668-67ef-4c76-9a5d-d32a88a24c03-kube-api-access-bbzhb\") pod \"27015668-67ef-4c76-9a5d-d32a88a24c03\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.426601 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-ovsdbserver-nb\") pod \"27015668-67ef-4c76-9a5d-d32a88a24c03\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.426895 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-ovsdbserver-sb\") pod \"27015668-67ef-4c76-9a5d-d32a88a24c03\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.427051 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-config\") pod \"27015668-67ef-4c76-9a5d-d32a88a24c03\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.427096 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-dns-svc\") pod \"27015668-67ef-4c76-9a5d-d32a88a24c03\" (UID: \"27015668-67ef-4c76-9a5d-d32a88a24c03\") " Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.433354 4482 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27015668-67ef-4c76-9a5d-d32a88a24c03-kube-api-access-bbzhb" (OuterVolumeSpecName: "kube-api-access-bbzhb") pod "27015668-67ef-4c76-9a5d-d32a88a24c03" (UID: "27015668-67ef-4c76-9a5d-d32a88a24c03"). InnerVolumeSpecName "kube-api-access-bbzhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.461689 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27015668-67ef-4c76-9a5d-d32a88a24c03" (UID: "27015668-67ef-4c76-9a5d-d32a88a24c03"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.461745 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-config" (OuterVolumeSpecName: "config") pod "27015668-67ef-4c76-9a5d-d32a88a24c03" (UID: "27015668-67ef-4c76-9a5d-d32a88a24c03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.465750 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "27015668-67ef-4c76-9a5d-d32a88a24c03" (UID: "27015668-67ef-4c76-9a5d-d32a88a24c03"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.474376 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "27015668-67ef-4c76-9a5d-d32a88a24c03" (UID: "27015668-67ef-4c76-9a5d-d32a88a24c03"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.529908 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbzhb\" (UniqueName: \"kubernetes.io/projected/27015668-67ef-4c76-9a5d-d32a88a24c03-kube-api-access-bbzhb\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.529940 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.529951 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.529963 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:56 crc kubenswrapper[4482]: I1125 07:02:56.529972 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27015668-67ef-4c76-9a5d-d32a88a24c03-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.007502 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nsg2v" event={"ID":"11533631-6479-4f8b-baaf-b1c71de4a966","Type":"ContainerStarted","Data":"fbf73235398a41b20075dd023a863d16de2c876a88c45c26e0f0249a327ebe45"} Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.011563 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mrd6z" event={"ID":"8fd67d9d-6ac0-496c-9726-ccb87a383a9a","Type":"ContainerDied","Data":"60a060866c42b3779fcc4153d9d63e89f10812f1a080f330f21a73bf8d52fa55"} Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.011613 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60a060866c42b3779fcc4153d9d63e89f10812f1a080f330f21a73bf8d52fa55" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.011690 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mrd6z" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.025600 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" event={"ID":"27015668-67ef-4c76-9a5d-d32a88a24c03","Type":"ContainerDied","Data":"073d075e0b30bc0763c24d244b06ca65be71a5964375839b15e016d2905cb786"} Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.025663 4482 scope.go:117] "RemoveContainer" containerID="d4b52585a05b742925cb717ed472952fd28ef09adaf986fedf1eb9ef552ca217" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.026176 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9999f46dc-zwcqh" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.037193 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-nsg2v" podStartSLOduration=3.114546124 podStartE2EDuration="16.037181489s" podCreationTimestamp="2025-11-25 07:02:41 +0000 UTC" firstStartedPulling="2025-11-25 07:02:43.136119038 +0000 UTC m=+937.624350297" lastFinishedPulling="2025-11-25 07:02:56.058754403 +0000 UTC m=+950.546985662" observedRunningTime="2025-11-25 07:02:57.025376805 +0000 UTC m=+951.513608074" watchObservedRunningTime="2025-11-25 07:02:57.037181489 +0000 UTC m=+951.525412748" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.038043 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7b7rr" event={"ID":"573eba52-c038-42e0-89a7-4791962151a4","Type":"ContainerDied","Data":"1321eda47494c4c41ac5515c0f23a561991608a113ed67c36143c84110ca03ac"} Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.038071 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1321eda47494c4c41ac5515c0f23a561991608a113ed67c36143c84110ca03ac" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.038112 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7b7rr" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.088413 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9999f46dc-zwcqh"] Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.120968 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9999f46dc-zwcqh"] Nov 25 07:02:57 crc kubenswrapper[4482]: E1125 07:02:57.151929 4482 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fd67d9d_6ac0_496c_9726_ccb87a383a9a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fd67d9d_6ac0_496c_9726_ccb87a383a9a.slice/crio-60a060866c42b3779fcc4153d9d63e89f10812f1a080f330f21a73bf8d52fa55\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27015668_67ef_4c76_9a5d_d32a88a24c03.slice\": RecentStats: unable to find data in memory cache]" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.192115 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5d96b8fb8d-vbp24"] Nov 25 07:02:57 crc kubenswrapper[4482]: E1125 07:02:57.192598 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27015668-67ef-4c76-9a5d-d32a88a24c03" containerName="init" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.192619 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="27015668-67ef-4c76-9a5d-d32a88a24c03" containerName="init" Nov 25 07:02:57 crc kubenswrapper[4482]: E1125 07:02:57.192639 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd67d9d-6ac0-496c-9726-ccb87a383a9a" containerName="keystone-bootstrap" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.192647 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd67d9d-6ac0-496c-9726-ccb87a383a9a" containerName="keystone-bootstrap" Nov 25 07:02:57 crc kubenswrapper[4482]: E1125 07:02:57.192669 4482 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="573eba52-c038-42e0-89a7-4791962151a4" containerName="neutron-db-sync" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.192674 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="573eba52-c038-42e0-89a7-4791962151a4" containerName="neutron-db-sync" Nov 25 07:02:57 crc kubenswrapper[4482]: E1125 07:02:57.192689 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27015668-67ef-4c76-9a5d-d32a88a24c03" containerName="dnsmasq-dns" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.192697 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="27015668-67ef-4c76-9a5d-d32a88a24c03" containerName="dnsmasq-dns" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.193101 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd67d9d-6ac0-496c-9726-ccb87a383a9a" containerName="keystone-bootstrap" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.193132 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="573eba52-c038-42e0-89a7-4791962151a4" containerName="neutron-db-sync" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.193153 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="27015668-67ef-4c76-9a5d-d32a88a24c03" containerName="dnsmasq-dns" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.193897 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.199095 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.199304 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.199330 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.199427 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.199618 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nl4pz" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.199781 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.206640 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5d96b8fb8d-vbp24"] Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.243785 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvxbp\" (UniqueName: \"kubernetes.io/projected/07c0f203-cb74-47b4-a3f2-b5038d51e914-kube-api-access-kvxbp\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.243904 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-public-tls-certs\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.243956 4482 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-credential-keys\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.244026 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-scripts\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.244110 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-combined-ca-bundle\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.244223 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-fernet-keys\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.244325 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-internal-tls-certs\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.244408 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-config-data\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.345767 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-internal-tls-certs\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.346046 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-config-data\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.346164 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvxbp\" (UniqueName: \"kubernetes.io/projected/07c0f203-cb74-47b4-a3f2-b5038d51e914-kube-api-access-kvxbp\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.346286 4482 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-public-tls-certs\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.346360 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-credential-keys\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.346432 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-scripts\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.346502 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-combined-ca-bundle\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.346606 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-fernet-keys\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.358855 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-public-tls-certs\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.360641 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-credential-keys\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.361156 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-config-data\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.363843 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-combined-ca-bundle\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.364432 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-internal-tls-certs\") pod 
\"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.374031 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64465d5ccf-x2spj"] Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.375480 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.376357 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-scripts\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.398223 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64465d5ccf-x2spj"] Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.402253 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvxbp\" (UniqueName: \"kubernetes.io/projected/07c0f203-cb74-47b4-a3f2-b5038d51e914-kube-api-access-kvxbp\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.402669 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07c0f203-cb74-47b4-a3f2-b5038d51e914-fernet-keys\") pod \"keystone-5d96b8fb8d-vbp24\" (UID: \"07c0f203-cb74-47b4-a3f2-b5038d51e914\") " pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.447760 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-dns-svc\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.449797 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-dns-swift-storage-0\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.449952 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swdbp\" (UniqueName: \"kubernetes.io/projected/bad0d471-5ace-4679-ab4c-e8d8a1d64462-kube-api-access-swdbp\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.450113 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-ovsdbserver-nb\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.450321 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-config\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.450446 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-ovsdbserver-sb\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.498651 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7954648f5b-fkx6n"] Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.500316 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.505977 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.506086 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.508511 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kdhzt" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.508702 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.516116 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.530252 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7954648f5b-fkx6n"] Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.556706 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfg82\" (UniqueName: \"kubernetes.io/projected/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-kube-api-access-mfg82\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.556878 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-ovsdbserver-nb\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.556957 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-ovndb-tls-certs\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.557046 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-config\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " 
pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.557123 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-combined-ca-bundle\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.557218 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-config\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.557439 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-ovsdbserver-sb\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.557511 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-httpd-config\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.557600 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-dns-svc\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.557663 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-dns-swift-storage-0\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.557736 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swdbp\" (UniqueName: \"kubernetes.io/projected/bad0d471-5ace-4679-ab4c-e8d8a1d64462-kube-api-access-swdbp\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.558839 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-ovsdbserver-nb\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.559532 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-config\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc 
kubenswrapper[4482]: I1125 07:02:57.560090 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-ovsdbserver-sb\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.560650 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-dns-svc\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.561186 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-dns-swift-storage-0\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.575350 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swdbp\" (UniqueName: \"kubernetes.io/projected/bad0d471-5ace-4679-ab4c-e8d8a1d64462-kube-api-access-swdbp\") pod \"dnsmasq-dns-64465d5ccf-x2spj\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.659127 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfg82\" (UniqueName: \"kubernetes.io/projected/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-kube-api-access-mfg82\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.659281 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-ovndb-tls-certs\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.659320 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-config\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.659354 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-combined-ca-bundle\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.659403 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-httpd-config\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.663296 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-httpd-config\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.676460 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-config\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.677263 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-ovndb-tls-certs\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.683480 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-combined-ca-bundle\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.688276 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfg82\" (UniqueName: \"kubernetes.io/projected/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-kube-api-access-mfg82\") pod \"neutron-7954648f5b-fkx6n\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.758748 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.823247 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:02:57 crc kubenswrapper[4482]: I1125 07:02:57.841011 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27015668-67ef-4c76-9a5d-d32a88a24c03" path="/var/lib/kubelet/pods/27015668-67ef-4c76-9a5d-d32a88a24c03/volumes" Nov 25 07:02:58 crc kubenswrapper[4482]: I1125 07:02:58.047786 4482 generic.go:334] "Generic (PLEG): container finished" podID="11533631-6479-4f8b-baaf-b1c71de4a966" containerID="fbf73235398a41b20075dd023a863d16de2c876a88c45c26e0f0249a327ebe45" exitCode=0 Nov 25 07:02:58 crc kubenswrapper[4482]: I1125 07:02:58.047801 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nsg2v" event={"ID":"11533631-6479-4f8b-baaf-b1c71de4a966","Type":"ContainerDied","Data":"fbf73235398a41b20075dd023a863d16de2c876a88c45c26e0f0249a327ebe45"} Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.423770 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-656dff569f-qv7tq"] Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.425988 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.428298 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.428635 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.496748 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-httpd-config\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.496818 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz5ww\" (UniqueName: \"kubernetes.io/projected/22d88363-431c-4a28-818e-f200d37d64b5-kube-api-access-lz5ww\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.496884 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-config\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.496909 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-internal-tls-certs\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.496925 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-ovndb-tls-certs\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.497011 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-combined-ca-bundle\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.497045 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-public-tls-certs\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.497697 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-656dff569f-qv7tq"] Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.598867 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-combined-ca-bundle\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.599023 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-public-tls-certs\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.599109 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-httpd-config\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.599187 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz5ww\" (UniqueName: \"kubernetes.io/projected/22d88363-431c-4a28-818e-f200d37d64b5-kube-api-access-lz5ww\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.599291 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-config\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.599327 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-internal-tls-certs\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.599345 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-ovndb-tls-certs\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.607032 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-combined-ca-bundle\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.607040 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-public-tls-certs\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.608014 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-ovndb-tls-certs\") pod \"neutron-656dff569f-qv7tq\" (UID: 
\"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.609935 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-internal-tls-certs\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.610624 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-config\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.612797 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-httpd-config\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.615268 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz5ww\" (UniqueName: \"kubernetes.io/projected/22d88363-431c-4a28-818e-f200d37d64b5-kube-api-access-lz5ww\") pod \"neutron-656dff569f-qv7tq\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:02:59 crc kubenswrapper[4482]: I1125 07:02:59.754762 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.566896 4482 scope.go:117] "RemoveContainer" containerID="f820cf0f020faef74b1f20c4370b76aa44f61dc719772be371391bc952abeae4" Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.591362 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-nsg2v" Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.730941 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-scripts\") pod \"11533631-6479-4f8b-baaf-b1c71de4a966\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.731034 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-combined-ca-bundle\") pod \"11533631-6479-4f8b-baaf-b1c71de4a966\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.731094 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b27rn\" (UniqueName: \"kubernetes.io/projected/11533631-6479-4f8b-baaf-b1c71de4a966-kube-api-access-b27rn\") pod \"11533631-6479-4f8b-baaf-b1c71de4a966\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.731163 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11533631-6479-4f8b-baaf-b1c71de4a966-logs\") pod \"11533631-6479-4f8b-baaf-b1c71de4a966\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.731221 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-config-data\") pod \"11533631-6479-4f8b-baaf-b1c71de4a966\" (UID: \"11533631-6479-4f8b-baaf-b1c71de4a966\") " Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.741148 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11533631-6479-4f8b-baaf-b1c71de4a966-logs" (OuterVolumeSpecName: "logs") pod "11533631-6479-4f8b-baaf-b1c71de4a966" (UID: "11533631-6479-4f8b-baaf-b1c71de4a966"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.758097 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11533631-6479-4f8b-baaf-b1c71de4a966-kube-api-access-b27rn" (OuterVolumeSpecName: "kube-api-access-b27rn") pod "11533631-6479-4f8b-baaf-b1c71de4a966" (UID: "11533631-6479-4f8b-baaf-b1c71de4a966"). InnerVolumeSpecName "kube-api-access-b27rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.792024 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-scripts" (OuterVolumeSpecName: "scripts") pod "11533631-6479-4f8b-baaf-b1c71de4a966" (UID: "11533631-6479-4f8b-baaf-b1c71de4a966"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.797922 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11533631-6479-4f8b-baaf-b1c71de4a966" (UID: "11533631-6479-4f8b-baaf-b1c71de4a966"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.799229 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-config-data" (OuterVolumeSpecName: "config-data") pod "11533631-6479-4f8b-baaf-b1c71de4a966" (UID: "11533631-6479-4f8b-baaf-b1c71de4a966"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.836078 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b27rn\" (UniqueName: \"kubernetes.io/projected/11533631-6479-4f8b-baaf-b1c71de4a966-kube-api-access-b27rn\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.836123 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11533631-6479-4f8b-baaf-b1c71de4a966-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.836136 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.836151 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:00 crc kubenswrapper[4482]: I1125 07:03:00.837688 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11533631-6479-4f8b-baaf-b1c71de4a966-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.102376 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nsg2v" event={"ID":"11533631-6479-4f8b-baaf-b1c71de4a966","Type":"ContainerDied","Data":"a18d1caca218408bf7e96770a225bea12c87c148661ee91deeefbbb8c5199b00"} Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.102760 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a18d1caca218408bf7e96770a225bea12c87c148661ee91deeefbbb8c5199b00" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.102627 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-nsg2v" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.118107 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qm4lm" event={"ID":"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f","Type":"ContainerStarted","Data":"22df4d6578c18583c058a7a90fcceb72256ebc798a36408a01ff1c222e2d44ae"} Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.143470 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-qm4lm" podStartSLOduration=2.339807711 podStartE2EDuration="20.14344704s" podCreationTimestamp="2025-11-25 07:02:41 +0000 UTC" firstStartedPulling="2025-11-25 07:02:42.819571371 +0000 UTC m=+937.307802619" lastFinishedPulling="2025-11-25 07:03:00.623210689 +0000 UTC m=+955.111441948" observedRunningTime="2025-11-25 07:03:01.140333661 +0000 UTC m=+955.628564920" watchObservedRunningTime="2025-11-25 07:03:01.14344704 +0000 UTC m=+955.631678288" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.182110 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64465d5ccf-x2spj"] Nov 25 07:03:01 crc kubenswrapper[4482]: W1125 07:03:01.187249 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbad0d471_5ace_4679_ab4c_e8d8a1d64462.slice/crio-bbf873c7e6c162d4308eb89dccf1e2c999850f3c7c12bd5851a3ee282cf35cef WatchSource:0}: Error finding container bbf873c7e6c162d4308eb89dccf1e2c999850f3c7c12bd5851a3ee282cf35cef: Status 404 returned error can't find the container with id bbf873c7e6c162d4308eb89dccf1e2c999850f3c7c12bd5851a3ee282cf35cef Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.422583 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5d96b8fb8d-vbp24"] Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.454654 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-656dff569f-qv7tq"] Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.734667 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-766dfbbcb6-85kbc"] Nov 25 07:03:01 crc kubenswrapper[4482]: E1125 07:03:01.735063 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11533631-6479-4f8b-baaf-b1c71de4a966" containerName="placement-db-sync" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.735079 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="11533631-6479-4f8b-baaf-b1c71de4a966" containerName="placement-db-sync" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.735261 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="11533631-6479-4f8b-baaf-b1c71de4a966" containerName="placement-db-sync" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.736151 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.739278 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.739310 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.739544 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.739759 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.739916 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kc4sk" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.746352 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-766dfbbcb6-85kbc"] Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.767581 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56d81a5-adcd-4323-92c9-d294af1e6cd3-logs\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.767635 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-config-data\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.767657 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q72z\" (UniqueName: \"kubernetes.io/projected/c56d81a5-adcd-4323-92c9-d294af1e6cd3-kube-api-access-2q72z\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.767677 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-scripts\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.767742 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-internal-tls-certs\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.767768 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-combined-ca-bundle\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.767797 4482 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-public-tls-certs\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.869715 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-internal-tls-certs\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.869891 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-combined-ca-bundle\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.870043 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-public-tls-certs\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.870204 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56d81a5-adcd-4323-92c9-d294af1e6cd3-logs\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.870341 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-config-data\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.870678 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56d81a5-adcd-4323-92c9-d294af1e6cd3-logs\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.871192 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q72z\" (UniqueName: \"kubernetes.io/projected/c56d81a5-adcd-4323-92c9-d294af1e6cd3-kube-api-access-2q72z\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.871302 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-scripts\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.877621 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-public-tls-certs\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.879903 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-combined-ca-bundle\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.881318 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-config-data\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.886760 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-scripts\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.891013 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q72z\" (UniqueName: \"kubernetes.io/projected/c56d81a5-adcd-4323-92c9-d294af1e6cd3-kube-api-access-2q72z\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:01 crc kubenswrapper[4482]: I1125 07:03:01.897766 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56d81a5-adcd-4323-92c9-d294af1e6cd3-internal-tls-certs\") pod \"placement-766dfbbcb6-85kbc\" (UID: \"c56d81a5-adcd-4323-92c9-d294af1e6cd3\") " pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.083310 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.137223 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5d96b8fb8d-vbp24" event={"ID":"07c0f203-cb74-47b4-a3f2-b5038d51e914","Type":"ContainerStarted","Data":"7745f335656fbc566748e1e3f8929c2086ace91ec13796756d02557296f58b0a"} Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.137281 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5d96b8fb8d-vbp24" event={"ID":"07c0f203-cb74-47b4-a3f2-b5038d51e914","Type":"ContainerStarted","Data":"2c796cab4669bb683d0116bd39d865918b029510b5d9a294bbe02f460cea930a"} Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.137388 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.141849 4482 generic.go:334] "Generic (PLEG): container finished" podID="bad0d471-5ace-4679-ab4c-e8d8a1d64462" containerID="5fd5e027f4eb47d3c4285d77e8a57e26612dcfb287ede57b9eed8f85d41224f3" exitCode=0 Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.141920 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" event={"ID":"bad0d471-5ace-4679-ab4c-e8d8a1d64462","Type":"ContainerDied","Data":"5fd5e027f4eb47d3c4285d77e8a57e26612dcfb287ede57b9eed8f85d41224f3"} Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.141951 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" event={"ID":"bad0d471-5ace-4679-ab4c-e8d8a1d64462","Type":"ContainerStarted","Data":"bbf873c7e6c162d4308eb89dccf1e2c999850f3c7c12bd5851a3ee282cf35cef"} Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.151994 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-656dff569f-qv7tq" event={"ID":"22d88363-431c-4a28-818e-f200d37d64b5","Type":"ContainerStarted","Data":"fe51728de760837a078b8a05ca66fcfe6da4809abf66d4e5b3af011b979e5c8a"} Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.152041 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.152056 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-656dff569f-qv7tq" event={"ID":"22d88363-431c-4a28-818e-f200d37d64b5","Type":"ContainerStarted","Data":"ed6faa351c929fc918d6a73844719d8ef5abff25a831b2383f25c3a2c4b7b338"} Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.152064 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-656dff569f-qv7tq" event={"ID":"22d88363-431c-4a28-818e-f200d37d64b5","Type":"ContainerStarted","Data":"00392a0ad2bf729aac1e206b36eba0d86dfc42fe3fce74b8bc9caf4102ebf78a"} Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.170596 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5d96b8fb8d-vbp24" podStartSLOduration=5.170548975 podStartE2EDuration="5.170548975s" podCreationTimestamp="2025-11-25 07:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:03:02.160784316 +0000 UTC m=+956.649015576" watchObservedRunningTime="2025-11-25 07:03:02.170548975 +0000 UTC m=+956.658780234" Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.246315 4482 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/neutron-656dff569f-qv7tq" podStartSLOduration=3.246297662 podStartE2EDuration="3.246297662s" podCreationTimestamp="2025-11-25 07:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:03:02.234675693 +0000 UTC m=+956.722906942" watchObservedRunningTime="2025-11-25 07:03:02.246297662 +0000 UTC m=+956.734528921" Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.481397 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7954648f5b-fkx6n"] Nov 25 07:03:02 crc kubenswrapper[4482]: I1125 07:03:02.712664 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-766dfbbcb6-85kbc"] Nov 25 07:03:02 crc kubenswrapper[4482]: W1125 07:03:02.721925 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc56d81a5_adcd_4323_92c9_d294af1e6cd3.slice/crio-becd0dc303d23b7a473a4112622982217381d4e4eca9eca54d6d76a45afdae71 WatchSource:0}: Error finding container becd0dc303d23b7a473a4112622982217381d4e4eca9eca54d6d76a45afdae71: Status 404 returned error can't find the container with id becd0dc303d23b7a473a4112622982217381d4e4eca9eca54d6d76a45afdae71 Nov 25 07:03:03 crc kubenswrapper[4482]: I1125 07:03:03.164944 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-766dfbbcb6-85kbc" event={"ID":"c56d81a5-adcd-4323-92c9-d294af1e6cd3","Type":"ContainerStarted","Data":"becd0dc303d23b7a473a4112622982217381d4e4eca9eca54d6d76a45afdae71"} Nov 25 07:03:03 crc kubenswrapper[4482]: I1125 07:03:03.166859 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7954648f5b-fkx6n" event={"ID":"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0","Type":"ContainerStarted","Data":"aefe2f69641d703adc6819c0cdba4b2686ef574533606eabe12f4f8345fe9229"} Nov 25 07:03:04 crc kubenswrapper[4482]: I1125 07:03:04.191877 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7954648f5b-fkx6n" event={"ID":"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0","Type":"ContainerStarted","Data":"4b4d254252b63fc75295f16e2baa703a7e8aa76b21e18ddacbfd21e58cc389b7"} Nov 25 07:03:04 crc kubenswrapper[4482]: I1125 07:03:04.191950 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7954648f5b-fkx6n" event={"ID":"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0","Type":"ContainerStarted","Data":"d319bd6243db1ab6315e7d46fc566168b5ec6feabb196f82187025ad9cd4cc34"} Nov 25 07:03:04 crc kubenswrapper[4482]: I1125 07:03:04.192015 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:03:04 crc kubenswrapper[4482]: I1125 07:03:04.195983 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-766dfbbcb6-85kbc" event={"ID":"c56d81a5-adcd-4323-92c9-d294af1e6cd3","Type":"ContainerStarted","Data":"44cf06f55a13fdf50ac7787532b6debbd750a2d91defb64f80288a18c2b56b36"} Nov 25 07:03:04 crc kubenswrapper[4482]: I1125 07:03:04.196026 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-766dfbbcb6-85kbc" event={"ID":"c56d81a5-adcd-4323-92c9-d294af1e6cd3","Type":"ContainerStarted","Data":"aa7d81f2aaa560b195c4dacf7588b93cd6b6e39ec6ec752171e2504e4adb181d"} Nov 25 07:03:04 crc kubenswrapper[4482]: I1125 07:03:04.196529 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:04 crc 
kubenswrapper[4482]: I1125 07:03:04.196572 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:04 crc kubenswrapper[4482]: I1125 07:03:04.199781 4482 generic.go:334] "Generic (PLEG): container finished" podID="1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f" containerID="22df4d6578c18583c058a7a90fcceb72256ebc798a36408a01ff1c222e2d44ae" exitCode=0 Nov 25 07:03:04 crc kubenswrapper[4482]: I1125 07:03:04.199884 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qm4lm" event={"ID":"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f","Type":"ContainerDied","Data":"22df4d6578c18583c058a7a90fcceb72256ebc798a36408a01ff1c222e2d44ae"} Nov 25 07:03:04 crc kubenswrapper[4482]: I1125 07:03:04.202105 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" event={"ID":"bad0d471-5ace-4679-ab4c-e8d8a1d64462","Type":"ContainerStarted","Data":"851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7"} Nov 25 07:03:04 crc kubenswrapper[4482]: I1125 07:03:04.202546 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:03:04 crc kubenswrapper[4482]: I1125 07:03:04.229289 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7954648f5b-fkx6n" podStartSLOduration=7.229278436 podStartE2EDuration="7.229278436s" podCreationTimestamp="2025-11-25 07:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:03:04.228355596 +0000 UTC m=+958.716586855" watchObservedRunningTime="2025-11-25 07:03:04.229278436 +0000 UTC m=+958.717509695" Nov 25 07:03:04 crc kubenswrapper[4482]: I1125 07:03:04.251164 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" podStartSLOduration=7.251153252 podStartE2EDuration="7.251153252s" podCreationTimestamp="2025-11-25 07:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:03:04.244346842 +0000 UTC m=+958.732578101" watchObservedRunningTime="2025-11-25 07:03:04.251153252 +0000 UTC m=+958.739384512" Nov 25 07:03:05 crc kubenswrapper[4482]: I1125 07:03:05.569592 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qm4lm" Nov 25 07:03:05 crc kubenswrapper[4482]: I1125 07:03:05.589621 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-766dfbbcb6-85kbc" podStartSLOduration=4.58960124 podStartE2EDuration="4.58960124s" podCreationTimestamp="2025-11-25 07:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:03:04.283751615 +0000 UTC m=+958.771982874" watchObservedRunningTime="2025-11-25 07:03:05.58960124 +0000 UTC m=+960.077832499" Nov 25 07:03:05 crc kubenswrapper[4482]: I1125 07:03:05.673750 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-combined-ca-bundle\") pod \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\" (UID: \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\") " Nov 25 07:03:05 crc kubenswrapper[4482]: I1125 07:03:05.673901 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t57v\" (UniqueName: \"kubernetes.io/projected/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-kube-api-access-6t57v\") pod \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\" (UID: \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\") " Nov 25 07:03:05 crc kubenswrapper[4482]: I1125 07:03:05.673929 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-db-sync-config-data\") pod \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\" (UID: \"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f\") " Nov 25 07:03:05 crc kubenswrapper[4482]: I1125 07:03:05.682280 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f" (UID: "1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:05 crc kubenswrapper[4482]: I1125 07:03:05.682468 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-kube-api-access-6t57v" (OuterVolumeSpecName: "kube-api-access-6t57v") pod "1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f" (UID: "1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f"). InnerVolumeSpecName "kube-api-access-6t57v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:03:05 crc kubenswrapper[4482]: I1125 07:03:05.697330 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f" (UID: "1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:05 crc kubenswrapper[4482]: I1125 07:03:05.776920 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:05 crc kubenswrapper[4482]: I1125 07:03:05.776967 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t57v\" (UniqueName: \"kubernetes.io/projected/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-kube-api-access-6t57v\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:05 crc kubenswrapper[4482]: I1125 07:03:05.776981 4482 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.221397 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qm4lm" event={"ID":"1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f","Type":"ContainerDied","Data":"f932cee4fa0b02d63a97e12a7b7baf7c3f6509094614d6f7deb7ce9f2808b31d"} Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.221740 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f932cee4fa0b02d63a97e12a7b7baf7c3f6509094614d6f7deb7ce9f2808b31d" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.221485 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qm4lm" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.491087 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-76d7bb49f8-pmvwn"] Nov 25 07:03:06 crc kubenswrapper[4482]: E1125 07:03:06.494584 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f" containerName="barbican-db-sync" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.494608 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f" containerName="barbican-db-sync" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.494831 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f" containerName="barbican-db-sync" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.495888 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.501856 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7mvql" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.502152 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.502336 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.508470 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7546474697-whwwz"] Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.509733 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.519826 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.537223 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-76d7bb49f8-pmvwn"] Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.543605 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7546474697-whwwz"] Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.608606 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrskd\" (UniqueName: \"kubernetes.io/projected/9e253dad-2e3d-429d-a925-32520246a162-kube-api-access-nrskd\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.608789 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcd265f5-a03a-4b85-a287-e76a93ce3310-logs\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.608857 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e253dad-2e3d-429d-a925-32520246a162-combined-ca-bundle\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.608915 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e253dad-2e3d-429d-a925-32520246a162-config-data\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.608935 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e253dad-2e3d-429d-a925-32520246a162-logs\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.609123 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e253dad-2e3d-429d-a925-32520246a162-config-data-custom\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.609218 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcd265f5-a03a-4b85-a287-e76a93ce3310-config-data\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " 
pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.609252 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcd265f5-a03a-4b85-a287-e76a93ce3310-config-data-custom\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.609331 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcd265f5-a03a-4b85-a287-e76a93ce3310-combined-ca-bundle\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.609356 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvk9j\" (UniqueName: \"kubernetes.io/projected/dcd265f5-a03a-4b85-a287-e76a93ce3310-kube-api-access-wvk9j\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.637067 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64465d5ccf-x2spj"] Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.671699 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58d8d55fc5-62wcn"] Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.673885 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.705766 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58d8d55fc5-62wcn"] Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.714514 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e253dad-2e3d-429d-a925-32520246a162-config-data-custom\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.714554 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcd265f5-a03a-4b85-a287-e76a93ce3310-config-data\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.714582 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcd265f5-a03a-4b85-a287-e76a93ce3310-config-data-custom\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.714620 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcd265f5-a03a-4b85-a287-e76a93ce3310-combined-ca-bundle\") pod \"barbican-worker-7546474697-whwwz\" (UID: 
\"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.714638 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvk9j\" (UniqueName: \"kubernetes.io/projected/dcd265f5-a03a-4b85-a287-e76a93ce3310-kube-api-access-wvk9j\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.714690 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrskd\" (UniqueName: \"kubernetes.io/projected/9e253dad-2e3d-429d-a925-32520246a162-kube-api-access-nrskd\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.714728 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcd265f5-a03a-4b85-a287-e76a93ce3310-logs\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.714752 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e253dad-2e3d-429d-a925-32520246a162-combined-ca-bundle\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.714776 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e253dad-2e3d-429d-a925-32520246a162-config-data\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.714791 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e253dad-2e3d-429d-a925-32520246a162-logs\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.715141 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e253dad-2e3d-429d-a925-32520246a162-logs\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.718581 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcd265f5-a03a-4b85-a287-e76a93ce3310-logs\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.731762 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e253dad-2e3d-429d-a925-32520246a162-config-data\") pod 
\"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.737521 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e253dad-2e3d-429d-a925-32520246a162-config-data-custom\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.739688 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcd265f5-a03a-4b85-a287-e76a93ce3310-config-data\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.747648 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvk9j\" (UniqueName: \"kubernetes.io/projected/dcd265f5-a03a-4b85-a287-e76a93ce3310-kube-api-access-wvk9j\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.754213 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dcd265f5-a03a-4b85-a287-e76a93ce3310-config-data-custom\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.754601 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrskd\" (UniqueName: \"kubernetes.io/projected/9e253dad-2e3d-429d-a925-32520246a162-kube-api-access-nrskd\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.757819 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e253dad-2e3d-429d-a925-32520246a162-combined-ca-bundle\") pod \"barbican-keystone-listener-76d7bb49f8-pmvwn\" (UID: \"9e253dad-2e3d-429d-a925-32520246a162\") " pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.758898 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcd265f5-a03a-4b85-a287-e76a93ce3310-combined-ca-bundle\") pod \"barbican-worker-7546474697-whwwz\" (UID: \"dcd265f5-a03a-4b85-a287-e76a93ce3310\") " pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.774375 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5d5bdfb6-rcrpb"] Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.777125 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.785481 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.792789 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d5bdfb6-rcrpb"] Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.817348 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-config\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.817402 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-ovsdbserver-sb\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.817485 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlbks\" (UniqueName: \"kubernetes.io/projected/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-kube-api-access-jlbks\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.817645 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-dns-svc\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.817677 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-dns-swift-storage-0\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.817743 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-ovsdbserver-nb\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.822573 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.846525 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7546474697-whwwz" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.919390 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-combined-ca-bundle\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.919487 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-config\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.919513 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-logs\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.920607 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-config\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.920666 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-ovsdbserver-sb\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.920725 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlbks\" (UniqueName: \"kubernetes.io/projected/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-kube-api-access-jlbks\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.920752 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mccj\" (UniqueName: \"kubernetes.io/projected/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-kube-api-access-6mccj\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.920784 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-config-data\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.920869 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-dns-svc\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " 
pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.920889 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-config-data-custom\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.920922 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-dns-swift-storage-0\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.920970 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-ovsdbserver-nb\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.922311 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-ovsdbserver-sb\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.923796 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-dns-svc\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.924637 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-ovsdbserver-nb\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.928497 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-dns-swift-storage-0\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:06 crc kubenswrapper[4482]: I1125 07:03:06.954308 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlbks\" (UniqueName: \"kubernetes.io/projected/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-kube-api-access-jlbks\") pod \"dnsmasq-dns-58d8d55fc5-62wcn\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.003705 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.023668 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-config-data-custom\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.023772 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-combined-ca-bundle\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.023808 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-logs\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.023872 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mccj\" (UniqueName: \"kubernetes.io/projected/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-kube-api-access-6mccj\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.023918 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-config-data\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.024290 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-logs\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.028882 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-config-data-custom\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.029330 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-config-data\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.031278 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-combined-ca-bundle\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 
07:03:07.040019 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mccj\" (UniqueName: \"kubernetes.io/projected/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-kube-api-access-6mccj\") pod \"barbican-api-5d5bdfb6-rcrpb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.130476 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.241626 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" podUID="bad0d471-5ace-4679-ab4c-e8d8a1d64462" containerName="dnsmasq-dns" containerID="cri-o://851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7" gracePeriod=10 Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.307061 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-76d7bb49f8-pmvwn"] Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.387034 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7546474697-whwwz"] Nov 25 07:03:07 crc kubenswrapper[4482]: W1125 07:03:07.394073 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcd265f5_a03a_4b85_a287_e76a93ce3310.slice/crio-6f172de25e7c2b8dd93c30df8e9f6b0be483b0f93c241b57de2956804f801853 WatchSource:0}: Error finding container 6f172de25e7c2b8dd93c30df8e9f6b0be483b0f93c241b57de2956804f801853: Status 404 returned error can't find the container with id 6f172de25e7c2b8dd93c30df8e9f6b0be483b0f93c241b57de2956804f801853 Nov 25 07:03:07 crc kubenswrapper[4482]: E1125 07:03:07.494366 4482 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbad0d471_5ace_4679_ab4c_e8d8a1d64462.slice/crio-conmon-851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbad0d471_5ace_4679_ab4c_e8d8a1d64462.slice/crio-851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7.scope\": RecentStats: unable to find data in memory cache]" Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.549394 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58d8d55fc5-62wcn"] Nov 25 07:03:07 crc kubenswrapper[4482]: W1125 07:03:07.559274 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22ad88d8_cb8a_4137_b2d6_f8e787a1526b.slice/crio-709ca5739c0d46ea26a93c7a09a3403583a095d08d3650d31d8572d24427c714 WatchSource:0}: Error finding container 709ca5739c0d46ea26a93c7a09a3403583a095d08d3650d31d8572d24427c714: Status 404 returned error can't find the container with id 709ca5739c0d46ea26a93c7a09a3403583a095d08d3650d31d8572d24427c714 Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.707271 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d5bdfb6-rcrpb"] Nov 25 07:03:07 crc kubenswrapper[4482]: W1125 07:03:07.729926 4482 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0bc93ea_483e_4c8f_8985_eab0e30f44cb.slice/crio-9f762b74981288ce6af2cc5260534445319066c6028c69e67110c4fbe9dffdb7 WatchSource:0}: Error finding container 9f762b74981288ce6af2cc5260534445319066c6028c69e67110c4fbe9dffdb7: Status 404 returned error can't find the container with id 9f762b74981288ce6af2cc5260534445319066c6028c69e67110c4fbe9dffdb7 Nov 25 07:03:07 crc kubenswrapper[4482]: I1125 07:03:07.879974 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.054947 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-dns-svc\") pod \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.055298 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swdbp\" (UniqueName: \"kubernetes.io/projected/bad0d471-5ace-4679-ab4c-e8d8a1d64462-kube-api-access-swdbp\") pod \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.055443 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-ovsdbserver-nb\") pod \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.055554 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-config\") pod \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.055620 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-ovsdbserver-sb\") pod \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.055675 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-dns-swift-storage-0\") pod \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\" (UID: \"bad0d471-5ace-4679-ab4c-e8d8a1d64462\") " Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.065903 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad0d471-5ace-4679-ab4c-e8d8a1d64462-kube-api-access-swdbp" (OuterVolumeSpecName: "kube-api-access-swdbp") pod "bad0d471-5ace-4679-ab4c-e8d8a1d64462" (UID: "bad0d471-5ace-4679-ab4c-e8d8a1d64462"). InnerVolumeSpecName "kube-api-access-swdbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.097259 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bad0d471-5ace-4679-ab4c-e8d8a1d64462" (UID: "bad0d471-5ace-4679-ab4c-e8d8a1d64462"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.101364 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bad0d471-5ace-4679-ab4c-e8d8a1d64462" (UID: "bad0d471-5ace-4679-ab4c-e8d8a1d64462"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.104765 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bad0d471-5ace-4679-ab4c-e8d8a1d64462" (UID: "bad0d471-5ace-4679-ab4c-e8d8a1d64462"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.135374 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bad0d471-5ace-4679-ab4c-e8d8a1d64462" (UID: "bad0d471-5ace-4679-ab4c-e8d8a1d64462"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.136760 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-config" (OuterVolumeSpecName: "config") pod "bad0d471-5ace-4679-ab4c-e8d8a1d64462" (UID: "bad0d471-5ace-4679-ab4c-e8d8a1d64462"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.158381 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swdbp\" (UniqueName: \"kubernetes.io/projected/bad0d471-5ace-4679-ab4c-e8d8a1d64462-kube-api-access-swdbp\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.158408 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.158417 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.158426 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.158434 4482 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.158442 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bad0d471-5ace-4679-ab4c-e8d8a1d64462-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.254456 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-5d5bdfb6-rcrpb" event={"ID":"c0bc93ea-483e-4c8f-8985-eab0e30f44cb","Type":"ContainerStarted","Data":"449faa6e58542e25505d87a3d11fea18084fce07b7ad5619c38376ef18de1515"} Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.254514 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5bdfb6-rcrpb" event={"ID":"c0bc93ea-483e-4c8f-8985-eab0e30f44cb","Type":"ContainerStarted","Data":"1e7e4d192d5debf188e1f6b30d2cfb1d12e35abb739800240e449c14c6bc622a"} Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.254527 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5bdfb6-rcrpb" event={"ID":"c0bc93ea-483e-4c8f-8985-eab0e30f44cb","Type":"ContainerStarted","Data":"9f762b74981288ce6af2cc5260534445319066c6028c69e67110c4fbe9dffdb7"} Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.255745 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.255800 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.262640 4482 generic.go:334] "Generic (PLEG): container finished" podID="bad0d471-5ace-4679-ab4c-e8d8a1d64462" containerID="851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7" exitCode=0 Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.262694 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.262728 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" event={"ID":"bad0d471-5ace-4679-ab4c-e8d8a1d64462","Type":"ContainerDied","Data":"851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7"} Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.262767 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64465d5ccf-x2spj" event={"ID":"bad0d471-5ace-4679-ab4c-e8d8a1d64462","Type":"ContainerDied","Data":"bbf873c7e6c162d4308eb89dccf1e2c999850f3c7c12bd5851a3ee282cf35cef"} Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.262793 4482 scope.go:117] "RemoveContainer" containerID="851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.264162 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" event={"ID":"9e253dad-2e3d-429d-a925-32520246a162","Type":"ContainerStarted","Data":"72b4d9f3f32992508e8023bafb4859801f037a8ed0320eca840c71d534e5c6cd"} Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.266464 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7546474697-whwwz" event={"ID":"dcd265f5-a03a-4b85-a287-e76a93ce3310","Type":"ContainerStarted","Data":"6f172de25e7c2b8dd93c30df8e9f6b0be483b0f93c241b57de2956804f801853"} Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.268402 4482 generic.go:334] "Generic (PLEG): container finished" podID="22ad88d8-cb8a-4137-b2d6-f8e787a1526b" containerID="f076f517b51ebb8e0ad7d429ff2f80da111c21197a57cdf0d5e15e6205b71841" exitCode=0 Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.268446 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" 
event={"ID":"22ad88d8-cb8a-4137-b2d6-f8e787a1526b","Type":"ContainerDied","Data":"f076f517b51ebb8e0ad7d429ff2f80da111c21197a57cdf0d5e15e6205b71841"} Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.268466 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" event={"ID":"22ad88d8-cb8a-4137-b2d6-f8e787a1526b","Type":"ContainerStarted","Data":"709ca5739c0d46ea26a93c7a09a3403583a095d08d3650d31d8572d24427c714"} Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.286657 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5d5bdfb6-rcrpb" podStartSLOduration=2.286641227 podStartE2EDuration="2.286641227s" podCreationTimestamp="2025-11-25 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:03:08.277413332 +0000 UTC m=+962.765644591" watchObservedRunningTime="2025-11-25 07:03:08.286641227 +0000 UTC m=+962.774872487" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.316102 4482 scope.go:117] "RemoveContainer" containerID="5fd5e027f4eb47d3c4285d77e8a57e26612dcfb287ede57b9eed8f85d41224f3" Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.357903 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64465d5ccf-x2spj"] Nov 25 07:03:08 crc kubenswrapper[4482]: I1125 07:03:08.365962 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64465d5ccf-x2spj"] Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.117330 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.117702 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.117761 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.118915 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6be423e1d99d845691f688b98451ff731b0a6e0f033aa86bb907250d322d441c"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.118990 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://6be423e1d99d845691f688b98451ff731b0a6e0f033aa86bb907250d322d441c" gracePeriod=600 Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.298064 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-654f7c9478-k6sqw"] Nov 25 07:03:09 crc kubenswrapper[4482]: E1125 07:03:09.298711 4482 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad0d471-5ace-4679-ab4c-e8d8a1d64462" containerName="dnsmasq-dns" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.299374 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad0d471-5ace-4679-ab4c-e8d8a1d64462" containerName="dnsmasq-dns" Nov 25 07:03:09 crc kubenswrapper[4482]: E1125 07:03:09.299476 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad0d471-5ace-4679-ab4c-e8d8a1d64462" containerName="init" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.299535 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad0d471-5ace-4679-ab4c-e8d8a1d64462" containerName="init" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.299845 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad0d471-5ace-4679-ab4c-e8d8a1d64462" containerName="dnsmasq-dns" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.301302 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.304504 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.304738 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.319980 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-654f7c9478-k6sqw"] Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.327743 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="6be423e1d99d845691f688b98451ff731b0a6e0f033aa86bb907250d322d441c" exitCode=0 Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.328767 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"6be423e1d99d845691f688b98451ff731b0a6e0f033aa86bb907250d322d441c"} Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.502987 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-public-tls-certs\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.503084 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3334963-38da-4fd1-a89e-029174ff01ce-logs\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.503141 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6g4n\" (UniqueName: \"kubernetes.io/projected/e3334963-38da-4fd1-a89e-029174ff01ce-kube-api-access-w6g4n\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.503291 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-internal-tls-certs\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.503365 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-config-data-custom\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.503491 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-combined-ca-bundle\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.503733 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-config-data\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.606631 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3334963-38da-4fd1-a89e-029174ff01ce-logs\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.606731 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6g4n\" (UniqueName: \"kubernetes.io/projected/e3334963-38da-4fd1-a89e-029174ff01ce-kube-api-access-w6g4n\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.606777 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-internal-tls-certs\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.606805 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-config-data-custom\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.606871 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-combined-ca-bundle\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.607015 4482 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-config-data\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.607060 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-public-tls-certs\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.608127 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3334963-38da-4fd1-a89e-029174ff01ce-logs\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.626308 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-public-tls-certs\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.629911 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-combined-ca-bundle\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.637675 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-internal-tls-certs\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.638351 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-config-data-custom\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.645912 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6g4n\" (UniqueName: \"kubernetes.io/projected/e3334963-38da-4fd1-a89e-029174ff01ce-kube-api-access-w6g4n\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.647184 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3334963-38da-4fd1-a89e-029174ff01ce-config-data\") pod \"barbican-api-654f7c9478-k6sqw\" (UID: \"e3334963-38da-4fd1-a89e-029174ff01ce\") " pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.841803 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bad0d471-5ace-4679-ab4c-e8d8a1d64462" 
path="/var/lib/kubelet/pods/bad0d471-5ace-4679-ab4c-e8d8a1d64462/volumes" Nov 25 07:03:09 crc kubenswrapper[4482]: I1125 07:03:09.927488 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:12 crc kubenswrapper[4482]: I1125 07:03:12.106754 4482 scope.go:117] "RemoveContainer" containerID="851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7" Nov 25 07:03:12 crc kubenswrapper[4482]: E1125 07:03:12.114741 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7\": container with ID starting with 851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7 not found: ID does not exist" containerID="851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7" Nov 25 07:03:12 crc kubenswrapper[4482]: I1125 07:03:12.114779 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7"} err="failed to get container status \"851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7\": rpc error: code = NotFound desc = could not find container \"851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7\": container with ID starting with 851eae887ce90050e101c4db99b483cd2e867f28dcd69676d629c370cbbcf5b7 not found: ID does not exist" Nov 25 07:03:12 crc kubenswrapper[4482]: I1125 07:03:12.114804 4482 scope.go:117] "RemoveContainer" containerID="5fd5e027f4eb47d3c4285d77e8a57e26612dcfb287ede57b9eed8f85d41224f3" Nov 25 07:03:12 crc kubenswrapper[4482]: E1125 07:03:12.115105 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fd5e027f4eb47d3c4285d77e8a57e26612dcfb287ede57b9eed8f85d41224f3\": container with ID starting with 5fd5e027f4eb47d3c4285d77e8a57e26612dcfb287ede57b9eed8f85d41224f3 not found: ID does not exist" containerID="5fd5e027f4eb47d3c4285d77e8a57e26612dcfb287ede57b9eed8f85d41224f3" Nov 25 07:03:12 crc kubenswrapper[4482]: I1125 07:03:12.115155 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd5e027f4eb47d3c4285d77e8a57e26612dcfb287ede57b9eed8f85d41224f3"} err="failed to get container status \"5fd5e027f4eb47d3c4285d77e8a57e26612dcfb287ede57b9eed8f85d41224f3\": rpc error: code = NotFound desc = could not find container \"5fd5e027f4eb47d3c4285d77e8a57e26612dcfb287ede57b9eed8f85d41224f3\": container with ID starting with 5fd5e027f4eb47d3c4285d77e8a57e26612dcfb287ede57b9eed8f85d41224f3 not found: ID does not exist" Nov 25 07:03:12 crc kubenswrapper[4482]: I1125 07:03:12.115206 4482 scope.go:117] "RemoveContainer" containerID="18fd7402468da26f930d0a283cd4f3dcbe4ac307cf8525f069560121b3739a9f" Nov 25 07:03:12 crc kubenswrapper[4482]: I1125 07:03:12.644024 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-654f7c9478-k6sqw"] Nov 25 07:03:12 crc kubenswrapper[4482]: W1125 07:03:12.654448 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3334963_38da_4fd1_a89e_029174ff01ce.slice/crio-154759d3b3c0cd5d3be43f41c4d65f261ebe8cdd4f0c208b078101680c7e3ccf WatchSource:0}: Error finding container 154759d3b3c0cd5d3be43f41c4d65f261ebe8cdd4f0c208b078101680c7e3ccf: Status 404 returned error can't find the container with id 
154759d3b3c0cd5d3be43f41c4d65f261ebe8cdd4f0c208b078101680c7e3ccf Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.433607 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" event={"ID":"22ad88d8-cb8a-4137-b2d6-f8e787a1526b","Type":"ContainerStarted","Data":"1d8100df2ff1c8fc7cb6c8eb4fa7b81293b28c95ba40e747a682df17ca2f74e3"} Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.434142 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.439462 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"74ac51368ca9a85524d27db3fb42de85573ff45ef8883e47eb5fe2759d039e48"} Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.441158 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2c0ac8f-2b76-45a3-af85-5990913bc03a","Type":"ContainerStarted","Data":"40be42855bfac49bec1255396dd5e074aaecd8d028edf160e46ceab36f50c2dd"} Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.449300 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" event={"ID":"9e253dad-2e3d-429d-a925-32520246a162","Type":"ContainerStarted","Data":"86ae002730f99ad4c3fa77537361fdfca23ef9fb82dea533df70e862592c6d76"} Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.449351 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" event={"ID":"9e253dad-2e3d-429d-a925-32520246a162","Type":"ContainerStarted","Data":"076f32403bf6c7a38fde3a36b307b924b3ee1aca13755ec509cf6cb50c75aba0"} Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.456396 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-654f7c9478-k6sqw" event={"ID":"e3334963-38da-4fd1-a89e-029174ff01ce","Type":"ContainerStarted","Data":"e5ce989151edb0080fe68ead971fa7c180d30fb86b6caeb8efcd29ed3472987f"} Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.456428 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-654f7c9478-k6sqw" event={"ID":"e3334963-38da-4fd1-a89e-029174ff01ce","Type":"ContainerStarted","Data":"82b25460fea94c1176cacb03e692c13c5d4f7deeebe861575c6da54e8afb1cde"} Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.456438 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-654f7c9478-k6sqw" event={"ID":"e3334963-38da-4fd1-a89e-029174ff01ce","Type":"ContainerStarted","Data":"154759d3b3c0cd5d3be43f41c4d65f261ebe8cdd4f0c208b078101680c7e3ccf"} Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.456931 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.456960 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.465903 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" podStartSLOduration=7.465864328 podStartE2EDuration="7.465864328s" podCreationTimestamp="2025-11-25 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-25 07:03:13.448504141 +0000 UTC m=+967.936735400" watchObservedRunningTime="2025-11-25 07:03:13.465864328 +0000 UTC m=+967.954095588" Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.473663 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-76d7bb49f8-pmvwn" podStartSLOduration=2.676335076 podStartE2EDuration="7.473644284s" podCreationTimestamp="2025-11-25 07:03:06 +0000 UTC" firstStartedPulling="2025-11-25 07:03:07.345746296 +0000 UTC m=+961.833977544" lastFinishedPulling="2025-11-25 07:03:12.143055493 +0000 UTC m=+966.631286752" observedRunningTime="2025-11-25 07:03:13.468908997 +0000 UTC m=+967.957140247" watchObservedRunningTime="2025-11-25 07:03:13.473644284 +0000 UTC m=+967.961875544" Nov 25 07:03:13 crc kubenswrapper[4482]: I1125 07:03:13.522386 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-654f7c9478-k6sqw" podStartSLOduration=4.522366853 podStartE2EDuration="4.522366853s" podCreationTimestamp="2025-11-25 07:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:03:13.505149747 +0000 UTC m=+967.993381006" watchObservedRunningTime="2025-11-25 07:03:13.522366853 +0000 UTC m=+968.010598112" Nov 25 07:03:14 crc kubenswrapper[4482]: I1125 07:03:14.371483 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:15 crc kubenswrapper[4482]: I1125 07:03:15.558486 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:17 crc kubenswrapper[4482]: I1125 07:03:17.006192 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:03:17 crc kubenswrapper[4482]: I1125 07:03:17.075805 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698b5d6cf7-cn5k5"] Nov 25 07:03:17 crc kubenswrapper[4482]: I1125 07:03:17.076024 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" podUID="9ed040b0-24c3-4b02-aefb-a7eaced9d994" containerName="dnsmasq-dns" containerID="cri-o://cc9522752075bcba35687a9363077030121b0489413b1b8a70a9aecd148b1783" gracePeriod=10 Nov 25 07:03:17 crc kubenswrapper[4482]: I1125 07:03:17.504571 4482 generic.go:334] "Generic (PLEG): container finished" podID="9ed040b0-24c3-4b02-aefb-a7eaced9d994" containerID="cc9522752075bcba35687a9363077030121b0489413b1b8a70a9aecd148b1783" exitCode=0 Nov 25 07:03:17 crc kubenswrapper[4482]: I1125 07:03:17.504626 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" event={"ID":"9ed040b0-24c3-4b02-aefb-a7eaced9d994","Type":"ContainerDied","Data":"cc9522752075bcba35687a9363077030121b0489413b1b8a70a9aecd148b1783"} Nov 25 07:03:19 crc kubenswrapper[4482]: I1125 07:03:19.366612 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" podUID="9ed040b0-24c3-4b02-aefb-a7eaced9d994" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused" Nov 25 07:03:21 crc kubenswrapper[4482]: I1125 07:03:21.233610 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:21 crc 
kubenswrapper[4482]: I1125 07:03:21.314849 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-654f7c9478-k6sqw" Nov 25 07:03:21 crc kubenswrapper[4482]: I1125 07:03:21.394526 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d5bdfb6-rcrpb"] Nov 25 07:03:21 crc kubenswrapper[4482]: I1125 07:03:21.394771 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d5bdfb6-rcrpb" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api-log" containerID="cri-o://1e7e4d192d5debf188e1f6b30d2cfb1d12e35abb739800240e449c14c6bc622a" gracePeriod=30 Nov 25 07:03:21 crc kubenswrapper[4482]: I1125 07:03:21.395135 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d5bdfb6-rcrpb" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api" containerID="cri-o://449faa6e58542e25505d87a3d11fea18084fce07b7ad5619c38376ef18de1515" gracePeriod=30 Nov 25 07:03:21 crc kubenswrapper[4482]: I1125 07:03:21.570419 4482 generic.go:334] "Generic (PLEG): container finished" podID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerID="1e7e4d192d5debf188e1f6b30d2cfb1d12e35abb739800240e449c14c6bc622a" exitCode=143 Nov 25 07:03:21 crc kubenswrapper[4482]: I1125 07:03:21.570923 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5bdfb6-rcrpb" event={"ID":"c0bc93ea-483e-4c8f-8985-eab0e30f44cb","Type":"ContainerDied","Data":"1e7e4d192d5debf188e1f6b30d2cfb1d12e35abb739800240e449c14c6bc622a"} Nov 25 07:03:24 crc kubenswrapper[4482]: I1125 07:03:24.366654 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" podUID="9ed040b0-24c3-4b02-aefb-a7eaced9d994" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused" Nov 25 07:03:24 crc kubenswrapper[4482]: I1125 07:03:24.569124 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5bdfb6-rcrpb" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:35352->10.217.0.160:9311: read: connection reset by peer" Nov 25 07:03:24 crc kubenswrapper[4482]: I1125 07:03:24.569151 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5bdfb6-rcrpb" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:35360->10.217.0.160:9311: read: connection reset by peer" Nov 25 07:03:25 crc kubenswrapper[4482]: I1125 07:03:25.664130 4482 generic.go:334] "Generic (PLEG): container finished" podID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerID="449faa6e58542e25505d87a3d11fea18084fce07b7ad5619c38376ef18de1515" exitCode=0 Nov 25 07:03:25 crc kubenswrapper[4482]: I1125 07:03:25.664434 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5bdfb6-rcrpb" event={"ID":"c0bc93ea-483e-4c8f-8985-eab0e30f44cb","Type":"ContainerDied","Data":"449faa6e58542e25505d87a3d11fea18084fce07b7ad5619c38376ef18de1515"} Nov 25 07:03:27 crc kubenswrapper[4482]: I1125 07:03:27.131585 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5bdfb6-rcrpb" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api-log" 
probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": dial tcp 10.217.0.160:9311: connect: connection refused" Nov 25 07:03:27 crc kubenswrapper[4482]: I1125 07:03:27.131712 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5bdfb6-rcrpb" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": dial tcp 10.217.0.160:9311: connect: connection refused" Nov 25 07:03:27 crc kubenswrapper[4482]: I1125 07:03:27.866866 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:03:28 crc kubenswrapper[4482]: E1125 07:03:28.098196 4482 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:1f5c0439f2433cb462b222a5bb23e629" Nov 25 07:03:28 crc kubenswrapper[4482]: E1125 07:03:28.098248 4482 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:1f5c0439f2433cb462b222a5bb23e629" Nov 25 07:03:28 crc kubenswrapper[4482]: E1125 07:03:28.098382 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:1f5c0439f2433cb462b222a5bb23e629,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gswq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-z8dgz_openstack(6d25c491-a613-4f52-8cb8-95d689bc3000): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 07:03:28 crc kubenswrapper[4482]: E1125 07:03:28.101129 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-z8dgz" podUID="6d25c491-a613-4f52-8cb8-95d689bc3000" Nov 25 07:03:28 crc kubenswrapper[4482]: E1125 07:03:28.687974 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:1f5c0439f2433cb462b222a5bb23e629\\\"\"" pod="openstack/glance-db-sync-z8dgz" podUID="6d25c491-a613-4f52-8cb8-95d689bc3000" Nov 25 07:03:29 crc kubenswrapper[4482]: I1125 07:03:29.426284 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5d96b8fb8d-vbp24" Nov 25 07:03:29 crc kubenswrapper[4482]: I1125 07:03:29.788324 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-656dff569f-qv7tq" Nov 25 07:03:29 crc kubenswrapper[4482]: I1125 07:03:29.862909 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7954648f5b-fkx6n"] Nov 25 07:03:29 crc kubenswrapper[4482]: I1125 07:03:29.863155 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7954648f5b-fkx6n" podUID="d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" containerName="neutron-api" containerID="cri-o://d319bd6243db1ab6315e7d46fc566168b5ec6feabb196f82187025ad9cd4cc34" gracePeriod=30 Nov 25 07:03:29 crc kubenswrapper[4482]: I1125 07:03:29.863226 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7954648f5b-fkx6n" podUID="d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" containerName="neutron-httpd" containerID="cri-o://4b4d254252b63fc75295f16e2baa703a7e8aa76b21e18ddacbfd21e58cc389b7" gracePeriod=30 Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.184670 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.186152 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.189538 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.190992 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.191322 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-78v5p" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.202749 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.234107 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ac4e9f57-0830-4b4e-9544-6f38309646f7-openstack-config-secret\") pod \"openstackclient\" (UID: \"ac4e9f57-0830-4b4e-9544-6f38309646f7\") " pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.234282 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ac4e9f57-0830-4b4e-9544-6f38309646f7-openstack-config\") pod \"openstackclient\" (UID: \"ac4e9f57-0830-4b4e-9544-6f38309646f7\") " pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.234332 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btnmm\" (UniqueName: \"kubernetes.io/projected/ac4e9f57-0830-4b4e-9544-6f38309646f7-kube-api-access-btnmm\") pod \"openstackclient\" (UID: \"ac4e9f57-0830-4b4e-9544-6f38309646f7\") " pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.234371 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac4e9f57-0830-4b4e-9544-6f38309646f7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ac4e9f57-0830-4b4e-9544-6f38309646f7\") " pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.335887 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac4e9f57-0830-4b4e-9544-6f38309646f7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ac4e9f57-0830-4b4e-9544-6f38309646f7\") " pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.335990 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ac4e9f57-0830-4b4e-9544-6f38309646f7-openstack-config-secret\") pod \"openstackclient\" (UID: \"ac4e9f57-0830-4b4e-9544-6f38309646f7\") " pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.336052 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ac4e9f57-0830-4b4e-9544-6f38309646f7-openstack-config\") pod \"openstackclient\" (UID: \"ac4e9f57-0830-4b4e-9544-6f38309646f7\") " pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.336071 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-btnmm\" (UniqueName: \"kubernetes.io/projected/ac4e9f57-0830-4b4e-9544-6f38309646f7-kube-api-access-btnmm\") pod \"openstackclient\" (UID: \"ac4e9f57-0830-4b4e-9544-6f38309646f7\") " pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.337926 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ac4e9f57-0830-4b4e-9544-6f38309646f7-openstack-config\") pod \"openstackclient\" (UID: \"ac4e9f57-0830-4b4e-9544-6f38309646f7\") " pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.342041 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac4e9f57-0830-4b4e-9544-6f38309646f7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ac4e9f57-0830-4b4e-9544-6f38309646f7\") " pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.342652 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ac4e9f57-0830-4b4e-9544-6f38309646f7-openstack-config-secret\") pod \"openstackclient\" (UID: \"ac4e9f57-0830-4b4e-9544-6f38309646f7\") " pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.351474 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btnmm\" (UniqueName: \"kubernetes.io/projected/ac4e9f57-0830-4b4e-9544-6f38309646f7-kube-api-access-btnmm\") pod \"openstackclient\" (UID: \"ac4e9f57-0830-4b4e-9544-6f38309646f7\") " pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.537148 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.720781 4482 generic.go:334] "Generic (PLEG): container finished" podID="d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" containerID="4b4d254252b63fc75295f16e2baa703a7e8aa76b21e18ddacbfd21e58cc389b7" exitCode=0 Nov 25 07:03:30 crc kubenswrapper[4482]: I1125 07:03:30.720865 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7954648f5b-fkx6n" event={"ID":"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0","Type":"ContainerDied","Data":"4b4d254252b63fc75295f16e2baa703a7e8aa76b21e18ddacbfd21e58cc389b7"} Nov 25 07:03:33 crc kubenswrapper[4482]: I1125 07:03:33.524362 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:33 crc kubenswrapper[4482]: I1125 07:03:33.553297 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-766dfbbcb6-85kbc" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.367466 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" podUID="9ed040b0-24c3-4b02-aefb-a7eaced9d994" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: i/o timeout" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.368107 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.443744 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.569455 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-config-data\") pod \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.569499 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-config-data-custom\") pod \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.569609 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mccj\" (UniqueName: \"kubernetes.io/projected/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-kube-api-access-6mccj\") pod \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.569830 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-logs\") pod \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.569889 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-combined-ca-bundle\") pod \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\" (UID: \"c0bc93ea-483e-4c8f-8985-eab0e30f44cb\") " Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.570404 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-logs" (OuterVolumeSpecName: "logs") pod "c0bc93ea-483e-4c8f-8985-eab0e30f44cb" (UID: "c0bc93ea-483e-4c8f-8985-eab0e30f44cb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.571312 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.579335 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-kube-api-access-6mccj" (OuterVolumeSpecName: "kube-api-access-6mccj") pod "c0bc93ea-483e-4c8f-8985-eab0e30f44cb" (UID: "c0bc93ea-483e-4c8f-8985-eab0e30f44cb"). InnerVolumeSpecName "kube-api-access-6mccj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.592345 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c0bc93ea-483e-4c8f-8985-eab0e30f44cb" (UID: "c0bc93ea-483e-4c8f-8985-eab0e30f44cb"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.602497 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0bc93ea-483e-4c8f-8985-eab0e30f44cb" (UID: "c0bc93ea-483e-4c8f-8985-eab0e30f44cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.628279 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-config-data" (OuterVolumeSpecName: "config-data") pod "c0bc93ea-483e-4c8f-8985-eab0e30f44cb" (UID: "c0bc93ea-483e-4c8f-8985-eab0e30f44cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.675093 4482 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.675127 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.675137 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mccj\" (UniqueName: \"kubernetes.io/projected/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-kube-api-access-6mccj\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.675155 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0bc93ea-483e-4c8f-8985-eab0e30f44cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.788343 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5bdfb6-rcrpb" event={"ID":"c0bc93ea-483e-4c8f-8985-eab0e30f44cb","Type":"ContainerDied","Data":"9f762b74981288ce6af2cc5260534445319066c6028c69e67110c4fbe9dffdb7"} Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.788400 4482 scope.go:117] "RemoveContainer" containerID="449faa6e58542e25505d87a3d11fea18084fce07b7ad5619c38376ef18de1515" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.788561 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d5bdfb6-rcrpb" Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.824921 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d5bdfb6-rcrpb"] Nov 25 07:03:34 crc kubenswrapper[4482]: I1125 07:03:34.842878 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5d5bdfb6-rcrpb"] Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.011565 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:03:35 crc kubenswrapper[4482]: E1125 07:03:35.011638 4482 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:1f5c0439f2433cb462b222a5bb23e629" Nov 25 07:03:35 crc kubenswrapper[4482]: E1125 07:03:35.011679 4482 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:1f5c0439f2433cb462b222a5bb23e629" Nov 25 07:03:35 crc kubenswrapper[4482]: E1125 07:03:35.011836 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:1f5c0439f2433cb462b222a5bb23e629,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jcz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-v2dqt_openstack(3e50321d-a59a-4d39-a485-4299ced13bdc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 07:03:35 crc kubenswrapper[4482]: E1125 07:03:35.013187 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-v2dqt" podUID="3e50321d-a59a-4d39-a485-4299ced13bdc" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.080445 4482 scope.go:117] "RemoveContainer" containerID="1e7e4d192d5debf188e1f6b30d2cfb1d12e35abb739800240e449c14c6bc622a" Nov 25 07:03:35 
crc kubenswrapper[4482]: I1125 07:03:35.088991 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-dns-swift-storage-0\") pod \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.089075 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-config\") pod \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.089117 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-ovsdbserver-nb\") pod \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.089153 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-ovsdbserver-sb\") pod \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.089309 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzk7q\" (UniqueName: \"kubernetes.io/projected/9ed040b0-24c3-4b02-aefb-a7eaced9d994-kube-api-access-vzk7q\") pod \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.089398 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-dns-svc\") pod \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\" (UID: \"9ed040b0-24c3-4b02-aefb-a7eaced9d994\") " Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.097833 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed040b0-24c3-4b02-aefb-a7eaced9d994-kube-api-access-vzk7q" (OuterVolumeSpecName: "kube-api-access-vzk7q") pod "9ed040b0-24c3-4b02-aefb-a7eaced9d994" (UID: "9ed040b0-24c3-4b02-aefb-a7eaced9d994"). InnerVolumeSpecName "kube-api-access-vzk7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.191107 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzk7q\" (UniqueName: \"kubernetes.io/projected/9ed040b0-24c3-4b02-aefb-a7eaced9d994-kube-api-access-vzk7q\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.216959 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ed040b0-24c3-4b02-aefb-a7eaced9d994" (UID: "9ed040b0-24c3-4b02-aefb-a7eaced9d994"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.217152 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ed040b0-24c3-4b02-aefb-a7eaced9d994" (UID: "9ed040b0-24c3-4b02-aefb-a7eaced9d994"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.217750 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ed040b0-24c3-4b02-aefb-a7eaced9d994" (UID: "9ed040b0-24c3-4b02-aefb-a7eaced9d994"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.218163 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ed040b0-24c3-4b02-aefb-a7eaced9d994" (UID: "9ed040b0-24c3-4b02-aefb-a7eaced9d994"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.266592 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-config" (OuterVolumeSpecName: "config") pod "9ed040b0-24c3-4b02-aefb-a7eaced9d994" (UID: "9ed040b0-24c3-4b02-aefb-a7eaced9d994"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.292699 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.292728 4482 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.292741 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.292750 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.292759 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ed040b0-24c3-4b02-aefb-a7eaced9d994-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.737806 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 07:03:35 crc kubenswrapper[4482]: W1125 07:03:35.767509 4482 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac4e9f57_0830_4b4e_9544_6f38309646f7.slice/crio-6382f61f591357f3e6b61790b902ca5b3e9aff96db42f7613cc0ba645da776ae WatchSource:0}: Error finding container 6382f61f591357f3e6b61790b902ca5b3e9aff96db42f7613cc0ba645da776ae: Status 404 returned error can't find the container with id 6382f61f591357f3e6b61790b902ca5b3e9aff96db42f7613cc0ba645da776ae Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.812474 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2c0ac8f-2b76-45a3-af85-5990913bc03a","Type":"ContainerStarted","Data":"263e528ee7c793c546f9a438b4f1ef055b77e1781dd02fdce8655af5d75c9bb1"} Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.817359 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7949b4656d-jjsj8" event={"ID":"e5634033-0ed5-4a52-9d37-a52ce07e4f50","Type":"ContainerStarted","Data":"726e2ebeace6dd67c29b5132b6b4f0dc67aa3a05ba9b9532ab3e2f9331430308"} Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.827833 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ac4e9f57-0830-4b4e-9544-6f38309646f7","Type":"ContainerStarted","Data":"6382f61f591357f3e6b61790b902ca5b3e9aff96db42f7613cc0ba645da776ae"} Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.841403 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" Nov 25 07:03:35 crc kubenswrapper[4482]: E1125 07:03:35.855877 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:1f5c0439f2433cb462b222a5bb23e629\\\"\"" pod="openstack/heat-db-sync-v2dqt" podUID="3e50321d-a59a-4d39-a485-4299ced13bdc" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.856882 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" path="/var/lib/kubelet/pods/c0bc93ea-483e-4c8f-8985-eab0e30f44cb/volumes" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.857976 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" event={"ID":"9ed040b0-24c3-4b02-aefb-a7eaced9d994","Type":"ContainerDied","Data":"0ef24f5f8612b502168a39bd41cbbe205644825fd8f30e6839570a06ce3ea645"} Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.858014 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76cc5bdc65-wzwtb" event={"ID":"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1","Type":"ContainerStarted","Data":"e2654ff2424d40b9a2887f182e4139b04d1150ed18bc38868daa6caac58a4b4d"} Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.858036 4482 scope.go:117] "RemoveContainer" containerID="cc9522752075bcba35687a9363077030121b0489413b1b8a70a9aecd148b1783" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.922196 4482 scope.go:117] "RemoveContainer" containerID="ce8237e8475643c4c94365f50613555290c1393f92607459d516c0e107d255cb" Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.934580 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698b5d6cf7-cn5k5"] Nov 25 07:03:35 crc kubenswrapper[4482]: I1125 07:03:35.947811 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698b5d6cf7-cn5k5"] Nov 25 07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.933254 
4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7546474697-whwwz" event={"ID":"dcd265f5-a03a-4b85-a287-e76a93ce3310","Type":"ContainerStarted","Data":"26c61290c4cd7777582ac069e2bd2a50a5727e75f1240e03c62f6183d28cebbb"} Nov 25 07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.933671 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7546474697-whwwz" event={"ID":"dcd265f5-a03a-4b85-a287-e76a93ce3310","Type":"ContainerStarted","Data":"cd1be4a86489ab07c09c5117634d7443a0b7d92414f8934292567103557354e8"} Nov 25 07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.954636 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d554fc8c-f2fdb" event={"ID":"961bd3cf-55d9-48b0-8f63-a8c2c2942c41","Type":"ContainerStarted","Data":"bf0cd49e922ceff1640de3610179e0e09e5d4ee50f2cce197953afece2e60fa9"} Nov 25 07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.954735 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d554fc8c-f2fdb" event={"ID":"961bd3cf-55d9-48b0-8f63-a8c2c2942c41","Type":"ContainerStarted","Data":"8aeeb04d8a45f0028a1578da836ad37e2c561b145a97c16a4cbc933b4edbc209"} Nov 25 07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.954967 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d554fc8c-f2fdb" podUID="961bd3cf-55d9-48b0-8f63-a8c2c2942c41" containerName="horizon-log" containerID="cri-o://8aeeb04d8a45f0028a1578da836ad37e2c561b145a97c16a4cbc933b4edbc209" gracePeriod=30 Nov 25 07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.955080 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d554fc8c-f2fdb" podUID="961bd3cf-55d9-48b0-8f63-a8c2c2942c41" containerName="horizon" containerID="cri-o://bf0cd49e922ceff1640de3610179e0e09e5d4ee50f2cce197953afece2e60fa9" gracePeriod=30 Nov 25 07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.970093 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76cc5bdc65-wzwtb" podUID="a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" containerName="horizon-log" containerID="cri-o://e2654ff2424d40b9a2887f182e4139b04d1150ed18bc38868daa6caac58a4b4d" gracePeriod=30 Nov 25 07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.970271 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76cc5bdc65-wzwtb" podUID="a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" containerName="horizon" containerID="cri-o://1b58a6c9c63d02d1c03df8a3e99942660dc44a9e6fee08f5d33872a90f509b15" gracePeriod=30 Nov 25 07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.971376 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76cc5bdc65-wzwtb" event={"ID":"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1","Type":"ContainerStarted","Data":"1b58a6c9c63d02d1c03df8a3e99942660dc44a9e6fee08f5d33872a90f509b15"} Nov 25 07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.976245 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7546474697-whwwz" podStartSLOduration=3.376668452 podStartE2EDuration="30.97622425s" podCreationTimestamp="2025-11-25 07:03:06 +0000 UTC" firstStartedPulling="2025-11-25 07:03:07.396652952 +0000 UTC m=+961.884884211" lastFinishedPulling="2025-11-25 07:03:34.99620875 +0000 UTC m=+989.484440009" observedRunningTime="2025-11-25 07:03:36.960527087 +0000 UTC m=+991.448758347" watchObservedRunningTime="2025-11-25 07:03:36.97622425 +0000 UTC m=+991.464455508" Nov 25 
07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.982741 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5666447f7c-7kf4h" event={"ID":"0204e2ef-b54e-40fd-a896-d366754a5b5f","Type":"ContainerStarted","Data":"bc8e586857a5aa46d535df56f5ad048383cb1a5f158552d4efc1df3f74d3c7f6"} Nov 25 07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.982786 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5666447f7c-7kf4h" event={"ID":"0204e2ef-b54e-40fd-a896-d366754a5b5f","Type":"ContainerStarted","Data":"9e1af7d92fe34ad17e33f0c96dc29aa6a0740ebd19190c7322558bd36252afa8"} Nov 25 07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.982928 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5666447f7c-7kf4h" podUID="0204e2ef-b54e-40fd-a896-d366754a5b5f" containerName="horizon-log" containerID="cri-o://9e1af7d92fe34ad17e33f0c96dc29aa6a0740ebd19190c7322558bd36252afa8" gracePeriod=30 Nov 25 07:03:36 crc kubenswrapper[4482]: I1125 07:03:36.983046 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5666447f7c-7kf4h" podUID="0204e2ef-b54e-40fd-a896-d366754a5b5f" containerName="horizon" containerID="cri-o://bc8e586857a5aa46d535df56f5ad048383cb1a5f158552d4efc1df3f74d3c7f6" gracePeriod=30 Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.000032 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fbb9df54d-nfljm" event={"ID":"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db","Type":"ContainerStarted","Data":"b413209fdcec3cfb2c8c8ab7f1f86197105913d1fe9b1a9351cbb40552f3741c"} Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.000086 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fbb9df54d-nfljm" event={"ID":"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db","Type":"ContainerStarted","Data":"bf57552a7fbbb61e7934b0e4c3f0cff69fbc4f6dd5ce6c818e2a6a4c59ffa912"} Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.010698 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7949b4656d-jjsj8" event={"ID":"e5634033-0ed5-4a52-9d37-a52ce07e4f50","Type":"ContainerStarted","Data":"d1af5bb5fc116dd56aec67defc0854642fcf477f51e82bc36dc33eaad2976646"} Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.012868 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78d554fc8c-f2fdb" podStartSLOduration=3.195749859 podStartE2EDuration="53.012850245s" podCreationTimestamp="2025-11-25 07:02:44 +0000 UTC" firstStartedPulling="2025-11-25 07:02:45.286257117 +0000 UTC m=+939.774488376" lastFinishedPulling="2025-11-25 07:03:35.103357512 +0000 UTC m=+989.591588762" observedRunningTime="2025-11-25 07:03:36.987282756 +0000 UTC m=+991.475514015" watchObservedRunningTime="2025-11-25 07:03:37.012850245 +0000 UTC m=+991.501081504" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.036519 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5666447f7c-7kf4h" podStartSLOduration=4.111916265 podStartE2EDuration="56.036505709s" podCreationTimestamp="2025-11-25 07:02:41 +0000 UTC" firstStartedPulling="2025-11-25 07:02:43.157393763 +0000 UTC m=+937.645625023" lastFinishedPulling="2025-11-25 07:03:35.081983208 +0000 UTC m=+989.570214467" observedRunningTime="2025-11-25 07:03:37.021240733 +0000 UTC m=+991.509471992" watchObservedRunningTime="2025-11-25 07:03:37.036505709 +0000 UTC m=+991.524736968" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.054157 
4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-76cc5bdc65-wzwtb" podStartSLOduration=4.275209331 podStartE2EDuration="57.054133791s" podCreationTimestamp="2025-11-25 07:02:40 +0000 UTC" firstStartedPulling="2025-11-25 07:02:42.303376549 +0000 UTC m=+936.791607807" lastFinishedPulling="2025-11-25 07:03:35.082301008 +0000 UTC m=+989.570532267" observedRunningTime="2025-11-25 07:03:37.043663482 +0000 UTC m=+991.531894742" watchObservedRunningTime="2025-11-25 07:03:37.054133791 +0000 UTC m=+991.542365050" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.084732 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jcvz8"] Nov 25 07:03:37 crc kubenswrapper[4482]: E1125 07:03:37.085196 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api-log" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.085213 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api-log" Nov 25 07:03:37 crc kubenswrapper[4482]: E1125 07:03:37.085252 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed040b0-24c3-4b02-aefb-a7eaced9d994" containerName="dnsmasq-dns" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.085260 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed040b0-24c3-4b02-aefb-a7eaced9d994" containerName="dnsmasq-dns" Nov 25 07:03:37 crc kubenswrapper[4482]: E1125 07:03:37.085287 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed040b0-24c3-4b02-aefb-a7eaced9d994" containerName="init" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.085293 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed040b0-24c3-4b02-aefb-a7eaced9d994" containerName="init" Nov 25 07:03:37 crc kubenswrapper[4482]: E1125 07:03:37.085304 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.085310 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.085515 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed040b0-24c3-4b02-aefb-a7eaced9d994" containerName="dnsmasq-dns" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.085535 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.085552 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api-log" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.086491 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jcvz8" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.132948 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jcvz8"] Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.133036 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5bdfb6-rcrpb" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.133402 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5bdfb6-rcrpb" podUID="c0bc93ea-483e-4c8f-8985-eab0e30f44cb" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": dial tcp 10.217.0.160:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.146647 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7949b4656d-jjsj8" podStartSLOduration=3.719315756 podStartE2EDuration="46.146624531s" podCreationTimestamp="2025-11-25 07:02:51 +0000 UTC" firstStartedPulling="2025-11-25 07:02:52.602113979 +0000 UTC m=+947.090345238" lastFinishedPulling="2025-11-25 07:03:35.029422754 +0000 UTC m=+989.517654013" observedRunningTime="2025-11-25 07:03:37.079924902 +0000 UTC m=+991.568156160" watchObservedRunningTime="2025-11-25 07:03:37.146624531 +0000 UTC m=+991.634855790" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.156899 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69v9c\" (UniqueName: \"kubernetes.io/projected/d4820888-8372-4ac2-b8bd-f6d5f1f64770-kube-api-access-69v9c\") pod \"nova-api-db-create-jcvz8\" (UID: \"d4820888-8372-4ac2-b8bd-f6d5f1f64770\") " pod="openstack/nova-api-db-create-jcvz8" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.157094 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4820888-8372-4ac2-b8bd-f6d5f1f64770-operator-scripts\") pod \"nova-api-db-create-jcvz8\" (UID: \"d4820888-8372-4ac2-b8bd-f6d5f1f64770\") " pod="openstack/nova-api-db-create-jcvz8" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.159321 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5fbb9df54d-nfljm" podStartSLOduration=3.306337912 podStartE2EDuration="46.159295396s" podCreationTimestamp="2025-11-25 07:02:51 +0000 UTC" firstStartedPulling="2025-11-25 07:02:52.227733989 +0000 UTC m=+946.715965248" lastFinishedPulling="2025-11-25 07:03:35.080691473 +0000 UTC m=+989.568922732" observedRunningTime="2025-11-25 07:03:37.120492337 +0000 UTC m=+991.608723586" watchObservedRunningTime="2025-11-25 07:03:37.159295396 +0000 UTC m=+991.647526656" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.212214 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-fvmzq"] Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.214825 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fvmzq" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.243488 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fvmzq"] Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.268122 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e8cc-account-create-hf9xd"] Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.269590 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e8cc-account-create-hf9xd" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.271723 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.279950 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69v9c\" (UniqueName: \"kubernetes.io/projected/d4820888-8372-4ac2-b8bd-f6d5f1f64770-kube-api-access-69v9c\") pod \"nova-api-db-create-jcvz8\" (UID: \"d4820888-8372-4ac2-b8bd-f6d5f1f64770\") " pod="openstack/nova-api-db-create-jcvz8" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.280307 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4820888-8372-4ac2-b8bd-f6d5f1f64770-operator-scripts\") pod \"nova-api-db-create-jcvz8\" (UID: \"d4820888-8372-4ac2-b8bd-f6d5f1f64770\") " pod="openstack/nova-api-db-create-jcvz8" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.280709 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmt4x\" (UniqueName: \"kubernetes.io/projected/cae725f0-8063-4795-bbee-c00ee44a38b8-kube-api-access-xmt4x\") pod \"nova-cell0-db-create-fvmzq\" (UID: \"cae725f0-8063-4795-bbee-c00ee44a38b8\") " pod="openstack/nova-cell0-db-create-fvmzq" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.280802 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae725f0-8063-4795-bbee-c00ee44a38b8-operator-scripts\") pod \"nova-cell0-db-create-fvmzq\" (UID: \"cae725f0-8063-4795-bbee-c00ee44a38b8\") " pod="openstack/nova-cell0-db-create-fvmzq" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.281276 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4820888-8372-4ac2-b8bd-f6d5f1f64770-operator-scripts\") pod \"nova-api-db-create-jcvz8\" (UID: \"d4820888-8372-4ac2-b8bd-f6d5f1f64770\") " pod="openstack/nova-api-db-create-jcvz8" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.292929 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e8cc-account-create-hf9xd"] Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.384056 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e617f2ae-a16c-405e-b79a-5331a8884588-operator-scripts\") pod \"nova-api-e8cc-account-create-hf9xd\" (UID: \"e617f2ae-a16c-405e-b79a-5331a8884588\") " pod="openstack/nova-api-e8cc-account-create-hf9xd" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.384104 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxjxk\" (UniqueName: 
\"kubernetes.io/projected/e617f2ae-a16c-405e-b79a-5331a8884588-kube-api-access-pxjxk\") pod \"nova-api-e8cc-account-create-hf9xd\" (UID: \"e617f2ae-a16c-405e-b79a-5331a8884588\") " pod="openstack/nova-api-e8cc-account-create-hf9xd" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.384221 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmt4x\" (UniqueName: \"kubernetes.io/projected/cae725f0-8063-4795-bbee-c00ee44a38b8-kube-api-access-xmt4x\") pod \"nova-cell0-db-create-fvmzq\" (UID: \"cae725f0-8063-4795-bbee-c00ee44a38b8\") " pod="openstack/nova-cell0-db-create-fvmzq" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.384252 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae725f0-8063-4795-bbee-c00ee44a38b8-operator-scripts\") pod \"nova-cell0-db-create-fvmzq\" (UID: \"cae725f0-8063-4795-bbee-c00ee44a38b8\") " pod="openstack/nova-cell0-db-create-fvmzq" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.385110 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae725f0-8063-4795-bbee-c00ee44a38b8-operator-scripts\") pod \"nova-cell0-db-create-fvmzq\" (UID: \"cae725f0-8063-4795-bbee-c00ee44a38b8\") " pod="openstack/nova-cell0-db-create-fvmzq" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.392852 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69v9c\" (UniqueName: \"kubernetes.io/projected/d4820888-8372-4ac2-b8bd-f6d5f1f64770-kube-api-access-69v9c\") pod \"nova-api-db-create-jcvz8\" (UID: \"d4820888-8372-4ac2-b8bd-f6d5f1f64770\") " pod="openstack/nova-api-db-create-jcvz8" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.419947 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmt4x\" (UniqueName: \"kubernetes.io/projected/cae725f0-8063-4795-bbee-c00ee44a38b8-kube-api-access-xmt4x\") pod \"nova-cell0-db-create-fvmzq\" (UID: \"cae725f0-8063-4795-bbee-c00ee44a38b8\") " pod="openstack/nova-cell0-db-create-fvmzq" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.429934 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-tgdj7"] Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.432025 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tgdj7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.455008 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jcvz8" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.488368 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/754089e8-09b2-44ad-bdf7-ac4bb4871f3b-operator-scripts\") pod \"nova-cell1-db-create-tgdj7\" (UID: \"754089e8-09b2-44ad-bdf7-ac4bb4871f3b\") " pod="openstack/nova-cell1-db-create-tgdj7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.488456 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e617f2ae-a16c-405e-b79a-5331a8884588-operator-scripts\") pod \"nova-api-e8cc-account-create-hf9xd\" (UID: \"e617f2ae-a16c-405e-b79a-5331a8884588\") " pod="openstack/nova-api-e8cc-account-create-hf9xd" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.488480 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxjxk\" (UniqueName: \"kubernetes.io/projected/e617f2ae-a16c-405e-b79a-5331a8884588-kube-api-access-pxjxk\") pod \"nova-api-e8cc-account-create-hf9xd\" (UID: \"e617f2ae-a16c-405e-b79a-5331a8884588\") " pod="openstack/nova-api-e8cc-account-create-hf9xd" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.488547 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrsnq\" (UniqueName: \"kubernetes.io/projected/754089e8-09b2-44ad-bdf7-ac4bb4871f3b-kube-api-access-hrsnq\") pod \"nova-cell1-db-create-tgdj7\" (UID: \"754089e8-09b2-44ad-bdf7-ac4bb4871f3b\") " pod="openstack/nova-cell1-db-create-tgdj7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.489620 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e617f2ae-a16c-405e-b79a-5331a8884588-operator-scripts\") pod \"nova-api-e8cc-account-create-hf9xd\" (UID: \"e617f2ae-a16c-405e-b79a-5331a8884588\") " pod="openstack/nova-api-e8cc-account-create-hf9xd" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.492354 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-tgdj7"] Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.521892 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-383a-account-create-6z5m7"] Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.523127 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-383a-account-create-6z5m7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.526147 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxjxk\" (UniqueName: \"kubernetes.io/projected/e617f2ae-a16c-405e-b79a-5331a8884588-kube-api-access-pxjxk\") pod \"nova-api-e8cc-account-create-hf9xd\" (UID: \"e617f2ae-a16c-405e-b79a-5331a8884588\") " pod="openstack/nova-api-e8cc-account-create-hf9xd" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.534141 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.557718 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fvmzq" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.591522 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m8jv\" (UniqueName: \"kubernetes.io/projected/1d3410b4-a318-4018-85b3-1447b61ae0e5-kube-api-access-6m8jv\") pod \"nova-cell0-383a-account-create-6z5m7\" (UID: \"1d3410b4-a318-4018-85b3-1447b61ae0e5\") " pod="openstack/nova-cell0-383a-account-create-6z5m7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.591594 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/754089e8-09b2-44ad-bdf7-ac4bb4871f3b-operator-scripts\") pod \"nova-cell1-db-create-tgdj7\" (UID: \"754089e8-09b2-44ad-bdf7-ac4bb4871f3b\") " pod="openstack/nova-cell1-db-create-tgdj7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.591646 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3410b4-a318-4018-85b3-1447b61ae0e5-operator-scripts\") pod \"nova-cell0-383a-account-create-6z5m7\" (UID: \"1d3410b4-a318-4018-85b3-1447b61ae0e5\") " pod="openstack/nova-cell0-383a-account-create-6z5m7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.591667 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrsnq\" (UniqueName: \"kubernetes.io/projected/754089e8-09b2-44ad-bdf7-ac4bb4871f3b-kube-api-access-hrsnq\") pod \"nova-cell1-db-create-tgdj7\" (UID: \"754089e8-09b2-44ad-bdf7-ac4bb4871f3b\") " pod="openstack/nova-cell1-db-create-tgdj7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.592579 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/754089e8-09b2-44ad-bdf7-ac4bb4871f3b-operator-scripts\") pod \"nova-cell1-db-create-tgdj7\" (UID: \"754089e8-09b2-44ad-bdf7-ac4bb4871f3b\") " pod="openstack/nova-cell1-db-create-tgdj7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.615560 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-383a-account-create-6z5m7"] Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.618630 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e8cc-account-create-hf9xd" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.636730 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrsnq\" (UniqueName: \"kubernetes.io/projected/754089e8-09b2-44ad-bdf7-ac4bb4871f3b-kube-api-access-hrsnq\") pod \"nova-cell1-db-create-tgdj7\" (UID: \"754089e8-09b2-44ad-bdf7-ac4bb4871f3b\") " pod="openstack/nova-cell1-db-create-tgdj7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.702295 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m8jv\" (UniqueName: \"kubernetes.io/projected/1d3410b4-a318-4018-85b3-1447b61ae0e5-kube-api-access-6m8jv\") pod \"nova-cell0-383a-account-create-6z5m7\" (UID: \"1d3410b4-a318-4018-85b3-1447b61ae0e5\") " pod="openstack/nova-cell0-383a-account-create-6z5m7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.702589 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3410b4-a318-4018-85b3-1447b61ae0e5-operator-scripts\") pod \"nova-cell0-383a-account-create-6z5m7\" (UID: \"1d3410b4-a318-4018-85b3-1447b61ae0e5\") " pod="openstack/nova-cell0-383a-account-create-6z5m7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.714109 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-83e4-account-create-r6m84"] Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.715303 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83e4-account-create-r6m84" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.722666 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3410b4-a318-4018-85b3-1447b61ae0e5-operator-scripts\") pod \"nova-cell0-383a-account-create-6z5m7\" (UID: \"1d3410b4-a318-4018-85b3-1447b61ae0e5\") " pod="openstack/nova-cell0-383a-account-create-6z5m7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.728877 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.742799 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m8jv\" (UniqueName: \"kubernetes.io/projected/1d3410b4-a318-4018-85b3-1447b61ae0e5-kube-api-access-6m8jv\") pod \"nova-cell0-383a-account-create-6z5m7\" (UID: \"1d3410b4-a318-4018-85b3-1447b61ae0e5\") " pod="openstack/nova-cell0-383a-account-create-6z5m7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.770545 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-83e4-account-create-r6m84"] Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.805802 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82aeaff-100d-45a9-9694-aae65838cf91-operator-scripts\") pod \"nova-cell1-83e4-account-create-r6m84\" (UID: \"b82aeaff-100d-45a9-9694-aae65838cf91\") " pod="openstack/nova-cell1-83e4-account-create-r6m84" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.806068 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sjnb\" (UniqueName: \"kubernetes.io/projected/b82aeaff-100d-45a9-9694-aae65838cf91-kube-api-access-5sjnb\") pod 
\"nova-cell1-83e4-account-create-r6m84\" (UID: \"b82aeaff-100d-45a9-9694-aae65838cf91\") " pod="openstack/nova-cell1-83e4-account-create-r6m84" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.907719 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sjnb\" (UniqueName: \"kubernetes.io/projected/b82aeaff-100d-45a9-9694-aae65838cf91-kube-api-access-5sjnb\") pod \"nova-cell1-83e4-account-create-r6m84\" (UID: \"b82aeaff-100d-45a9-9694-aae65838cf91\") " pod="openstack/nova-cell1-83e4-account-create-r6m84" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.907875 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82aeaff-100d-45a9-9694-aae65838cf91-operator-scripts\") pod \"nova-cell1-83e4-account-create-r6m84\" (UID: \"b82aeaff-100d-45a9-9694-aae65838cf91\") " pod="openstack/nova-cell1-83e4-account-create-r6m84" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.908621 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82aeaff-100d-45a9-9694-aae65838cf91-operator-scripts\") pod \"nova-cell1-83e4-account-create-r6m84\" (UID: \"b82aeaff-100d-45a9-9694-aae65838cf91\") " pod="openstack/nova-cell1-83e4-account-create-r6m84" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.920453 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tgdj7" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.922705 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed040b0-24c3-4b02-aefb-a7eaced9d994" path="/var/lib/kubelet/pods/9ed040b0-24c3-4b02-aefb-a7eaced9d994/volumes" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.946044 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sjnb\" (UniqueName: \"kubernetes.io/projected/b82aeaff-100d-45a9-9694-aae65838cf91-kube-api-access-5sjnb\") pod \"nova-cell1-83e4-account-create-r6m84\" (UID: \"b82aeaff-100d-45a9-9694-aae65838cf91\") " pod="openstack/nova-cell1-83e4-account-create-r6m84" Nov 25 07:03:37 crc kubenswrapper[4482]: I1125 07:03:37.952941 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-383a-account-create-6z5m7" Nov 25 07:03:38 crc kubenswrapper[4482]: I1125 07:03:38.086730 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-83e4-account-create-r6m84" Nov 25 07:03:38 crc kubenswrapper[4482]: I1125 07:03:38.182575 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jcvz8"] Nov 25 07:03:38 crc kubenswrapper[4482]: W1125 07:03:38.254557 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4820888_8372_4ac2_b8bd_f6d5f1f64770.slice/crio-22b4ce867fbfa920f0ba8017ebbed0e7f18c679178c93dc0e903d4bfd0a29cef WatchSource:0}: Error finding container 22b4ce867fbfa920f0ba8017ebbed0e7f18c679178c93dc0e903d4bfd0a29cef: Status 404 returned error can't find the container with id 22b4ce867fbfa920f0ba8017ebbed0e7f18c679178c93dc0e903d4bfd0a29cef Nov 25 07:03:38 crc kubenswrapper[4482]: I1125 07:03:38.664555 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fvmzq"] Nov 25 07:03:38 crc kubenswrapper[4482]: E1125 07:03:38.724692 4482 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8499e8b_fa10_4f7c_99bb_7eb09c1ad2c0.slice/crio-conmon-d319bd6243db1ab6315e7d46fc566168b5ec6feabb196f82187025ad9cd4cc34.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8499e8b_fa10_4f7c_99bb_7eb09c1ad2c0.slice/crio-d319bd6243db1ab6315e7d46fc566168b5ec6feabb196f82187025ad9cd4cc34.scope\": RecentStats: unable to find data in memory cache]" Nov 25 07:03:38 crc kubenswrapper[4482]: I1125 07:03:38.794870 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e8cc-account-create-hf9xd"] Nov 25 07:03:38 crc kubenswrapper[4482]: I1125 07:03:38.930233 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-tgdj7"] Nov 25 07:03:39 crc kubenswrapper[4482]: W1125 07:03:39.031991 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod754089e8_09b2_44ad_bdf7_ac4bb4871f3b.slice/crio-77c61bdfd2a273bd238e9339f0a9e2b0f7e504d6002cd7454109404133af6561 WatchSource:0}: Error finding container 77c61bdfd2a273bd238e9339f0a9e2b0f7e504d6002cd7454109404133af6561: Status 404 returned error can't find the container with id 77c61bdfd2a273bd238e9339f0a9e2b0f7e504d6002cd7454109404133af6561 Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.090944 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-67b6c48dd9-tnxmm"] Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.103120 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.109691 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.109860 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.109977 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.134941 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tgdj7" event={"ID":"754089e8-09b2-44ad-bdf7-ac4bb4871f3b","Type":"ContainerStarted","Data":"77c61bdfd2a273bd238e9339f0a9e2b0f7e504d6002cd7454109404133af6561"} Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.151643 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-383a-account-create-6z5m7"] Nov 25 07:03:39 crc kubenswrapper[4482]: W1125 07:03:39.167239 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d3410b4_a318_4018_85b3_1447b61ae0e5.slice/crio-8b1487212eaeee596a3dfd919a0bf926d95a0db75d0aaffb5b96441557d3f808 WatchSource:0}: Error finding container 8b1487212eaeee596a3dfd919a0bf926d95a0db75d0aaffb5b96441557d3f808: Status 404 returned error can't find the container with id 8b1487212eaeee596a3dfd919a0bf926d95a0db75d0aaffb5b96441557d3f808 Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.184233 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67b6c48dd9-tnxmm"] Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.188810 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dec4187-62bd-4d10-b2f6-5888767b2b28-run-httpd\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.188847 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dec4187-62bd-4d10-b2f6-5888767b2b28-public-tls-certs\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.188919 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dec4187-62bd-4d10-b2f6-5888767b2b28-log-httpd\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.188965 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dec4187-62bd-4d10-b2f6-5888767b2b28-combined-ca-bundle\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.189010 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dec4187-62bd-4d10-b2f6-5888767b2b28-internal-tls-certs\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.189071 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8dec4187-62bd-4d10-b2f6-5888767b2b28-etc-swift\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.189144 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dec4187-62bd-4d10-b2f6-5888767b2b28-config-data\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.189217 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlfg9\" (UniqueName: \"kubernetes.io/projected/8dec4187-62bd-4d10-b2f6-5888767b2b28-kube-api-access-zlfg9\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.217365 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-83e4-account-create-r6m84"] Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.237151 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fvmzq" event={"ID":"cae725f0-8063-4795-bbee-c00ee44a38b8","Type":"ContainerStarted","Data":"0011f5b3ea386ab1a951a9fbf9cb057628cb2ff71bdcdc06c6c96a80f3a91e09"} Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.265192 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-fvmzq" podStartSLOduration=2.265178386 podStartE2EDuration="2.265178386s" podCreationTimestamp="2025-11-25 07:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:03:39.262847782 +0000 UTC m=+993.751079031" watchObservedRunningTime="2025-11-25 07:03:39.265178386 +0000 UTC m=+993.753409635" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.266816 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jcvz8" event={"ID":"d4820888-8372-4ac2-b8bd-f6d5f1f64770","Type":"ContainerStarted","Data":"22b4ce867fbfa920f0ba8017ebbed0e7f18c679178c93dc0e903d4bfd0a29cef"} Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.292946 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e8cc-account-create-hf9xd" event={"ID":"e617f2ae-a16c-405e-b79a-5331a8884588","Type":"ContainerStarted","Data":"eaba503bd9c25e831eca31dbf88792fdf2d9846c0e70fd1739d260d8b4a57e4a"} Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.295712 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dec4187-62bd-4d10-b2f6-5888767b2b28-internal-tls-certs\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 
07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.295789 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8dec4187-62bd-4d10-b2f6-5888767b2b28-etc-swift\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.295858 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dec4187-62bd-4d10-b2f6-5888767b2b28-config-data\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.295926 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlfg9\" (UniqueName: \"kubernetes.io/projected/8dec4187-62bd-4d10-b2f6-5888767b2b28-kube-api-access-zlfg9\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.295979 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dec4187-62bd-4d10-b2f6-5888767b2b28-run-httpd\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.296001 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dec4187-62bd-4d10-b2f6-5888767b2b28-public-tls-certs\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.296057 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dec4187-62bd-4d10-b2f6-5888767b2b28-log-httpd\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.296093 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dec4187-62bd-4d10-b2f6-5888767b2b28-combined-ca-bundle\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.297730 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dec4187-62bd-4d10-b2f6-5888767b2b28-run-httpd\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.310304 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8dec4187-62bd-4d10-b2f6-5888767b2b28-log-httpd\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.317727 4482 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dec4187-62bd-4d10-b2f6-5888767b2b28-config-data\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.341158 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8dec4187-62bd-4d10-b2f6-5888767b2b28-etc-swift\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.345969 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dec4187-62bd-4d10-b2f6-5888767b2b28-combined-ca-bundle\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.347614 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dec4187-62bd-4d10-b2f6-5888767b2b28-public-tls-certs\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.348309 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dec4187-62bd-4d10-b2f6-5888767b2b28-internal-tls-certs\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.361238 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlfg9\" (UniqueName: \"kubernetes.io/projected/8dec4187-62bd-4d10-b2f6-5888767b2b28-kube-api-access-zlfg9\") pod \"swift-proxy-67b6c48dd9-tnxmm\" (UID: \"8dec4187-62bd-4d10-b2f6-5888767b2b28\") " pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.369287 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698b5d6cf7-cn5k5" podUID="9ed040b0-24c3-4b02-aefb-a7eaced9d994" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: i/o timeout" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.381952 4482 generic.go:334] "Generic (PLEG): container finished" podID="d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" containerID="d319bd6243db1ab6315e7d46fc566168b5ec6feabb196f82187025ad9cd4cc34" exitCode=0 Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.381997 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7954648f5b-fkx6n" event={"ID":"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0","Type":"ContainerDied","Data":"d319bd6243db1ab6315e7d46fc566168b5ec6feabb196f82187025ad9cd4cc34"} Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.452031 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.603159 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.615299 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfg82\" (UniqueName: \"kubernetes.io/projected/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-kube-api-access-mfg82\") pod \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.615380 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-httpd-config\") pod \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.615437 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-combined-ca-bundle\") pod \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.615507 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-config\") pod \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.615620 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-ovndb-tls-certs\") pod \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\" (UID: \"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0\") " Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.638920 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" (UID: "d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.638934 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-kube-api-access-mfg82" (OuterVolumeSpecName: "kube-api-access-mfg82") pod "d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" (UID: "d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0"). InnerVolumeSpecName "kube-api-access-mfg82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.716972 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfg82\" (UniqueName: \"kubernetes.io/projected/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-kube-api-access-mfg82\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.717009 4482 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:39 crc kubenswrapper[4482]: I1125 07:03:39.998435 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" (UID: "d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.019455 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-config" (OuterVolumeSpecName: "config") pod "d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" (UID: "d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.059681 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.060493 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.082476 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" (UID: "d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.164443 4482 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.372841 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67b6c48dd9-tnxmm"] Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.421332 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83e4-account-create-r6m84" event={"ID":"b82aeaff-100d-45a9-9694-aae65838cf91","Type":"ContainerStarted","Data":"ecfb935954d7f35df84fb2d75a72c26d682dbb01556014ff0bb4fc0c078e5b54"} Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.421387 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83e4-account-create-r6m84" event={"ID":"b82aeaff-100d-45a9-9694-aae65838cf91","Type":"ContainerStarted","Data":"67d63ef33e1e3d415a150a0d4e3e7bcc76803abefb87581ce1542278404c7783"} Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.436001 4482 generic.go:334] "Generic (PLEG): container finished" podID="754089e8-09b2-44ad-bdf7-ac4bb4871f3b" containerID="600c24fde43f2c0c7db39eb1dc22497621eed85903b4c1e91be358b9aa5ce530" exitCode=0 Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.436083 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tgdj7" event={"ID":"754089e8-09b2-44ad-bdf7-ac4bb4871f3b","Type":"ContainerDied","Data":"600c24fde43f2c0c7db39eb1dc22497621eed85903b4c1e91be358b9aa5ce530"} Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.448581 4482 generic.go:334] "Generic (PLEG): container finished" podID="cae725f0-8063-4795-bbee-c00ee44a38b8" containerID="f09432ab721ccc06512c1952caab538dd6b44d2f9db2c84dc8963627e0347838" exitCode=0 Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.448703 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fvmzq" event={"ID":"cae725f0-8063-4795-bbee-c00ee44a38b8","Type":"ContainerDied","Data":"f09432ab721ccc06512c1952caab538dd6b44d2f9db2c84dc8963627e0347838"} Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.468216 4482 generic.go:334] "Generic (PLEG): container finished" podID="d4820888-8372-4ac2-b8bd-f6d5f1f64770" containerID="99022cbfd793bfa719f0c1456d3ae613406fde7cf1e69c24d5ca9bccaec27df7" exitCode=0 Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.468327 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jcvz8" event={"ID":"d4820888-8372-4ac2-b8bd-f6d5f1f64770","Type":"ContainerDied","Data":"99022cbfd793bfa719f0c1456d3ae613406fde7cf1e69c24d5ca9bccaec27df7"} Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.469149 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-83e4-account-create-r6m84" podStartSLOduration=3.469136185 podStartE2EDuration="3.469136185s" podCreationTimestamp="2025-11-25 07:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:03:40.458664043 +0000 UTC m=+994.946895302" watchObservedRunningTime="2025-11-25 07:03:40.469136185 +0000 UTC m=+994.957367444" Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.470325 4482 generic.go:334] "Generic (PLEG): container finished" 
podID="e617f2ae-a16c-405e-b79a-5331a8884588" containerID="3084593537a769be11003e4b88b0d06a1b8d11262a8479edbeedc721630daba5" exitCode=0 Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.470381 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e8cc-account-create-hf9xd" event={"ID":"e617f2ae-a16c-405e-b79a-5331a8884588","Type":"ContainerDied","Data":"3084593537a769be11003e4b88b0d06a1b8d11262a8479edbeedc721630daba5"} Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.481720 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7954648f5b-fkx6n" event={"ID":"d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0","Type":"ContainerDied","Data":"aefe2f69641d703adc6819c0cdba4b2686ef574533606eabe12f4f8345fe9229"} Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.481771 4482 scope.go:117] "RemoveContainer" containerID="4b4d254252b63fc75295f16e2baa703a7e8aa76b21e18ddacbfd21e58cc389b7" Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.481833 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7954648f5b-fkx6n" Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.517264 4482 generic.go:334] "Generic (PLEG): container finished" podID="1d3410b4-a318-4018-85b3-1447b61ae0e5" containerID="3c0a229f14073f5031de207333fcb4f1c7c0a21bc2d23910985df8156c18fa4e" exitCode=0 Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.517308 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-383a-account-create-6z5m7" event={"ID":"1d3410b4-a318-4018-85b3-1447b61ae0e5","Type":"ContainerDied","Data":"3c0a229f14073f5031de207333fcb4f1c7c0a21bc2d23910985df8156c18fa4e"} Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.517333 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-383a-account-create-6z5m7" event={"ID":"1d3410b4-a318-4018-85b3-1447b61ae0e5","Type":"ContainerStarted","Data":"8b1487212eaeee596a3dfd919a0bf926d95a0db75d0aaffb5b96441557d3f808"} Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.586710 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7954648f5b-fkx6n"] Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.596070 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7954648f5b-fkx6n"] Nov 25 07:03:40 crc kubenswrapper[4482]: I1125 07:03:40.610719 4482 scope.go:117] "RemoveContainer" containerID="d319bd6243db1ab6315e7d46fc566168b5ec6feabb196f82187025ad9cd4cc34" Nov 25 07:03:41 crc kubenswrapper[4482]: I1125 07:03:41.347334 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:03:41 crc kubenswrapper[4482]: I1125 07:03:41.542515 4482 generic.go:334] "Generic (PLEG): container finished" podID="b82aeaff-100d-45a9-9694-aae65838cf91" containerID="ecfb935954d7f35df84fb2d75a72c26d682dbb01556014ff0bb4fc0c078e5b54" exitCode=0 Nov 25 07:03:41 crc kubenswrapper[4482]: I1125 07:03:41.542581 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83e4-account-create-r6m84" event={"ID":"b82aeaff-100d-45a9-9694-aae65838cf91","Type":"ContainerDied","Data":"ecfb935954d7f35df84fb2d75a72c26d682dbb01556014ff0bb4fc0c078e5b54"} Nov 25 07:03:41 crc kubenswrapper[4482]: I1125 07:03:41.554159 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67b6c48dd9-tnxmm" 
event={"ID":"8dec4187-62bd-4d10-b2f6-5888767b2b28","Type":"ContainerStarted","Data":"401d7b82f505367e134a5506e104fab16db8945f6efd6140a9380b4d5af6141d"} Nov 25 07:03:41 crc kubenswrapper[4482]: I1125 07:03:41.554209 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67b6c48dd9-tnxmm" event={"ID":"8dec4187-62bd-4d10-b2f6-5888767b2b28","Type":"ContainerStarted","Data":"1405518c7b6af59bff0a88fb2197f960ec7efb7cd23e2739355c0cb47aa49079"} Nov 25 07:03:41 crc kubenswrapper[4482]: I1125 07:03:41.554220 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67b6c48dd9-tnxmm" event={"ID":"8dec4187-62bd-4d10-b2f6-5888767b2b28","Type":"ContainerStarted","Data":"956a0c928eb9f9c7c1691928859fb47b527767175f3115efcbcafbfcbf5850f5"} Nov 25 07:03:41 crc kubenswrapper[4482]: I1125 07:03:41.555045 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:41 crc kubenswrapper[4482]: I1125 07:03:41.555085 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:41 crc kubenswrapper[4482]: I1125 07:03:41.594273 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:03:41 crc kubenswrapper[4482]: I1125 07:03:41.594655 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:03:41 crc kubenswrapper[4482]: I1125 07:03:41.852935 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" path="/var/lib/kubelet/pods/d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0/volumes" Nov 25 07:03:41 crc kubenswrapper[4482]: I1125 07:03:41.859496 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:03:41 crc kubenswrapper[4482]: I1125 07:03:41.860322 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.158095 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fvmzq" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.191623 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-67b6c48dd9-tnxmm" podStartSLOduration=3.1915967419999998 podStartE2EDuration="3.191596742s" podCreationTimestamp="2025-11-25 07:03:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:03:41.588352542 +0000 UTC m=+996.076583801" watchObservedRunningTime="2025-11-25 07:03:42.191596742 +0000 UTC m=+996.679827992" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.222531 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5666447f7c-7kf4h" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.223588 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae725f0-8063-4795-bbee-c00ee44a38b8-operator-scripts\") pod \"cae725f0-8063-4795-bbee-c00ee44a38b8\" (UID: \"cae725f0-8063-4795-bbee-c00ee44a38b8\") " Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.223988 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmt4x\" (UniqueName: \"kubernetes.io/projected/cae725f0-8063-4795-bbee-c00ee44a38b8-kube-api-access-xmt4x\") pod \"cae725f0-8063-4795-bbee-c00ee44a38b8\" (UID: \"cae725f0-8063-4795-bbee-c00ee44a38b8\") " Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.226019 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cae725f0-8063-4795-bbee-c00ee44a38b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cae725f0-8063-4795-bbee-c00ee44a38b8" (UID: "cae725f0-8063-4795-bbee-c00ee44a38b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.236904 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae725f0-8063-4795-bbee-c00ee44a38b8-kube-api-access-xmt4x" (OuterVolumeSpecName: "kube-api-access-xmt4x") pod "cae725f0-8063-4795-bbee-c00ee44a38b8" (UID: "cae725f0-8063-4795-bbee-c00ee44a38b8"). InnerVolumeSpecName "kube-api-access-xmt4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.326313 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae725f0-8063-4795-bbee-c00ee44a38b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.326348 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmt4x\" (UniqueName: \"kubernetes.io/projected/cae725f0-8063-4795-bbee-c00ee44a38b8-kube-api-access-xmt4x\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.350965 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-383a-account-create-6z5m7" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.430302 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3410b4-a318-4018-85b3-1447b61ae0e5-operator-scripts\") pod \"1d3410b4-a318-4018-85b3-1447b61ae0e5\" (UID: \"1d3410b4-a318-4018-85b3-1447b61ae0e5\") " Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.430398 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m8jv\" (UniqueName: \"kubernetes.io/projected/1d3410b4-a318-4018-85b3-1447b61ae0e5-kube-api-access-6m8jv\") pod \"1d3410b4-a318-4018-85b3-1447b61ae0e5\" (UID: \"1d3410b4-a318-4018-85b3-1447b61ae0e5\") " Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.431401 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3410b4-a318-4018-85b3-1447b61ae0e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d3410b4-a318-4018-85b3-1447b61ae0e5" (UID: "1d3410b4-a318-4018-85b3-1447b61ae0e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.454472 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3410b4-a318-4018-85b3-1447b61ae0e5-kube-api-access-6m8jv" (OuterVolumeSpecName: "kube-api-access-6m8jv") pod "1d3410b4-a318-4018-85b3-1447b61ae0e5" (UID: "1d3410b4-a318-4018-85b3-1447b61ae0e5"). InnerVolumeSpecName "kube-api-access-6m8jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.547731 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m8jv\" (UniqueName: \"kubernetes.io/projected/1d3410b4-a318-4018-85b3-1447b61ae0e5-kube-api-access-6m8jv\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.548036 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3410b4-a318-4018-85b3-1447b61ae0e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.601309 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tgdj7" event={"ID":"754089e8-09b2-44ad-bdf7-ac4bb4871f3b","Type":"ContainerDied","Data":"77c61bdfd2a273bd238e9339f0a9e2b0f7e504d6002cd7454109404133af6561"} Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.601352 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77c61bdfd2a273bd238e9339f0a9e2b0f7e504d6002cd7454109404133af6561" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.604294 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fvmzq" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.604294 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fvmzq" event={"ID":"cae725f0-8063-4795-bbee-c00ee44a38b8","Type":"ContainerDied","Data":"0011f5b3ea386ab1a951a9fbf9cb057628cb2ff71bdcdc06c6c96a80f3a91e09"} Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.604337 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0011f5b3ea386ab1a951a9fbf9cb057628cb2ff71bdcdc06c6c96a80f3a91e09" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.612808 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tgdj7" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.613389 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jcvz8" event={"ID":"d4820888-8372-4ac2-b8bd-f6d5f1f64770","Type":"ContainerDied","Data":"22b4ce867fbfa920f0ba8017ebbed0e7f18c679178c93dc0e903d4bfd0a29cef"} Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.613429 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22b4ce867fbfa920f0ba8017ebbed0e7f18c679178c93dc0e903d4bfd0a29cef" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.617087 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e8cc-account-create-hf9xd" event={"ID":"e617f2ae-a16c-405e-b79a-5331a8884588","Type":"ContainerDied","Data":"eaba503bd9c25e831eca31dbf88792fdf2d9846c0e70fd1739d260d8b4a57e4a"} Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.617113 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaba503bd9c25e831eca31dbf88792fdf2d9846c0e70fd1739d260d8b4a57e4a" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.617254 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e8cc-account-create-hf9xd" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.617302 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jcvz8" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.632368 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-383a-account-create-6z5m7" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.636999 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-383a-account-create-6z5m7" event={"ID":"1d3410b4-a318-4018-85b3-1447b61ae0e5","Type":"ContainerDied","Data":"8b1487212eaeee596a3dfd919a0bf926d95a0db75d0aaffb5b96441557d3f808"} Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.637041 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b1487212eaeee596a3dfd919a0bf926d95a0db75d0aaffb5b96441557d3f808" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.650504 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/754089e8-09b2-44ad-bdf7-ac4bb4871f3b-operator-scripts\") pod \"754089e8-09b2-44ad-bdf7-ac4bb4871f3b\" (UID: \"754089e8-09b2-44ad-bdf7-ac4bb4871f3b\") " Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.650694 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrsnq\" (UniqueName: \"kubernetes.io/projected/754089e8-09b2-44ad-bdf7-ac4bb4871f3b-kube-api-access-hrsnq\") pod \"754089e8-09b2-44ad-bdf7-ac4bb4871f3b\" (UID: \"754089e8-09b2-44ad-bdf7-ac4bb4871f3b\") " Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.670430 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754089e8-09b2-44ad-bdf7-ac4bb4871f3b-kube-api-access-hrsnq" (OuterVolumeSpecName: "kube-api-access-hrsnq") pod "754089e8-09b2-44ad-bdf7-ac4bb4871f3b" (UID: "754089e8-09b2-44ad-bdf7-ac4bb4871f3b"). InnerVolumeSpecName "kube-api-access-hrsnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.670745 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754089e8-09b2-44ad-bdf7-ac4bb4871f3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "754089e8-09b2-44ad-bdf7-ac4bb4871f3b" (UID: "754089e8-09b2-44ad-bdf7-ac4bb4871f3b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.753025 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e617f2ae-a16c-405e-b79a-5331a8884588-operator-scripts\") pod \"e617f2ae-a16c-405e-b79a-5331a8884588\" (UID: \"e617f2ae-a16c-405e-b79a-5331a8884588\") " Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.753350 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69v9c\" (UniqueName: \"kubernetes.io/projected/d4820888-8372-4ac2-b8bd-f6d5f1f64770-kube-api-access-69v9c\") pod \"d4820888-8372-4ac2-b8bd-f6d5f1f64770\" (UID: \"d4820888-8372-4ac2-b8bd-f6d5f1f64770\") " Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.753463 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4820888-8372-4ac2-b8bd-f6d5f1f64770-operator-scripts\") pod \"d4820888-8372-4ac2-b8bd-f6d5f1f64770\" (UID: \"d4820888-8372-4ac2-b8bd-f6d5f1f64770\") " Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.753556 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxjxk\" (UniqueName: \"kubernetes.io/projected/e617f2ae-a16c-405e-b79a-5331a8884588-kube-api-access-pxjxk\") pod \"e617f2ae-a16c-405e-b79a-5331a8884588\" (UID: \"e617f2ae-a16c-405e-b79a-5331a8884588\") " Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.754874 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrsnq\" (UniqueName: \"kubernetes.io/projected/754089e8-09b2-44ad-bdf7-ac4bb4871f3b-kube-api-access-hrsnq\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.754998 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/754089e8-09b2-44ad-bdf7-ac4bb4871f3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.753578 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e617f2ae-a16c-405e-b79a-5331a8884588-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e617f2ae-a16c-405e-b79a-5331a8884588" (UID: "e617f2ae-a16c-405e-b79a-5331a8884588"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.754514 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4820888-8372-4ac2-b8bd-f6d5f1f64770-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4820888-8372-4ac2-b8bd-f6d5f1f64770" (UID: "d4820888-8372-4ac2-b8bd-f6d5f1f64770"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.763821 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e617f2ae-a16c-405e-b79a-5331a8884588-kube-api-access-pxjxk" (OuterVolumeSpecName: "kube-api-access-pxjxk") pod "e617f2ae-a16c-405e-b79a-5331a8884588" (UID: "e617f2ae-a16c-405e-b79a-5331a8884588"). InnerVolumeSpecName "kube-api-access-pxjxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.780545 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4820888-8372-4ac2-b8bd-f6d5f1f64770-kube-api-access-69v9c" (OuterVolumeSpecName: "kube-api-access-69v9c") pod "d4820888-8372-4ac2-b8bd-f6d5f1f64770" (UID: "d4820888-8372-4ac2-b8bd-f6d5f1f64770"). InnerVolumeSpecName "kube-api-access-69v9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.857740 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e617f2ae-a16c-405e-b79a-5331a8884588-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.857775 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69v9c\" (UniqueName: \"kubernetes.io/projected/d4820888-8372-4ac2-b8bd-f6d5f1f64770-kube-api-access-69v9c\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.857788 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4820888-8372-4ac2-b8bd-f6d5f1f64770-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:42 crc kubenswrapper[4482]: I1125 07:03:42.857796 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxjxk\" (UniqueName: \"kubernetes.io/projected/e617f2ae-a16c-405e-b79a-5331a8884588-kube-api-access-pxjxk\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:43 crc kubenswrapper[4482]: I1125 07:03:43.012700 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83e4-account-create-r6m84" Nov 25 07:03:43 crc kubenswrapper[4482]: I1125 07:03:43.062899 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82aeaff-100d-45a9-9694-aae65838cf91-operator-scripts\") pod \"b82aeaff-100d-45a9-9694-aae65838cf91\" (UID: \"b82aeaff-100d-45a9-9694-aae65838cf91\") " Nov 25 07:03:43 crc kubenswrapper[4482]: I1125 07:03:43.062976 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sjnb\" (UniqueName: \"kubernetes.io/projected/b82aeaff-100d-45a9-9694-aae65838cf91-kube-api-access-5sjnb\") pod \"b82aeaff-100d-45a9-9694-aae65838cf91\" (UID: \"b82aeaff-100d-45a9-9694-aae65838cf91\") " Nov 25 07:03:43 crc kubenswrapper[4482]: I1125 07:03:43.063547 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82aeaff-100d-45a9-9694-aae65838cf91-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b82aeaff-100d-45a9-9694-aae65838cf91" (UID: "b82aeaff-100d-45a9-9694-aae65838cf91"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:03:43 crc kubenswrapper[4482]: I1125 07:03:43.068932 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82aeaff-100d-45a9-9694-aae65838cf91-kube-api-access-5sjnb" (OuterVolumeSpecName: "kube-api-access-5sjnb") pod "b82aeaff-100d-45a9-9694-aae65838cf91" (UID: "b82aeaff-100d-45a9-9694-aae65838cf91"). InnerVolumeSpecName "kube-api-access-5sjnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:03:43 crc kubenswrapper[4482]: I1125 07:03:43.165051 4482 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82aeaff-100d-45a9-9694-aae65838cf91-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:43 crc kubenswrapper[4482]: I1125 07:03:43.165087 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sjnb\" (UniqueName: \"kubernetes.io/projected/b82aeaff-100d-45a9-9694-aae65838cf91-kube-api-access-5sjnb\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:43 crc kubenswrapper[4482]: I1125 07:03:43.649144 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tgdj7" Nov 25 07:03:43 crc kubenswrapper[4482]: I1125 07:03:43.656181 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83e4-account-create-r6m84" Nov 25 07:03:43 crc kubenswrapper[4482]: I1125 07:03:43.661304 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e8cc-account-create-hf9xd" Nov 25 07:03:43 crc kubenswrapper[4482]: I1125 07:03:43.661462 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jcvz8" Nov 25 07:03:43 crc kubenswrapper[4482]: I1125 07:03:43.661971 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83e4-account-create-r6m84" event={"ID":"b82aeaff-100d-45a9-9694-aae65838cf91","Type":"ContainerDied","Data":"67d63ef33e1e3d415a150a0d4e3e7bcc76803abefb87581ce1542278404c7783"} Nov 25 07:03:43 crc kubenswrapper[4482]: I1125 07:03:43.662043 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67d63ef33e1e3d415a150a0d4e3e7bcc76803abefb87581ce1542278404c7783" Nov 25 07:03:44 crc kubenswrapper[4482]: I1125 07:03:44.637076 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d554fc8c-f2fdb" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677029 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cfr4t"] Nov 25 07:03:47 crc kubenswrapper[4482]: E1125 07:03:47.677662 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82aeaff-100d-45a9-9694-aae65838cf91" containerName="mariadb-account-create" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677675 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82aeaff-100d-45a9-9694-aae65838cf91" containerName="mariadb-account-create" Nov 25 07:03:47 crc kubenswrapper[4482]: E1125 07:03:47.677687 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754089e8-09b2-44ad-bdf7-ac4bb4871f3b" containerName="mariadb-database-create" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677694 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="754089e8-09b2-44ad-bdf7-ac4bb4871f3b" containerName="mariadb-database-create" Nov 25 07:03:47 crc kubenswrapper[4482]: E1125 07:03:47.677724 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" containerName="neutron-api" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677731 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" containerName="neutron-api" Nov 25 07:03:47 crc kubenswrapper[4482]: E1125 07:03:47.677740 4482 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" containerName="neutron-httpd" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677750 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" containerName="neutron-httpd" Nov 25 07:03:47 crc kubenswrapper[4482]: E1125 07:03:47.677758 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e617f2ae-a16c-405e-b79a-5331a8884588" containerName="mariadb-account-create" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677764 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="e617f2ae-a16c-405e-b79a-5331a8884588" containerName="mariadb-account-create" Nov 25 07:03:47 crc kubenswrapper[4482]: E1125 07:03:47.677776 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4820888-8372-4ac2-b8bd-f6d5f1f64770" containerName="mariadb-database-create" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677782 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4820888-8372-4ac2-b8bd-f6d5f1f64770" containerName="mariadb-database-create" Nov 25 07:03:47 crc kubenswrapper[4482]: E1125 07:03:47.677788 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3410b4-a318-4018-85b3-1447b61ae0e5" containerName="mariadb-account-create" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677794 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3410b4-a318-4018-85b3-1447b61ae0e5" containerName="mariadb-account-create" Nov 25 07:03:47 crc kubenswrapper[4482]: E1125 07:03:47.677801 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae725f0-8063-4795-bbee-c00ee44a38b8" containerName="mariadb-database-create" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677806 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae725f0-8063-4795-bbee-c00ee44a38b8" containerName="mariadb-database-create" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677959 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" containerName="neutron-httpd" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677972 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="754089e8-09b2-44ad-bdf7-ac4bb4871f3b" containerName="mariadb-database-create" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677980 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="e617f2ae-a16c-405e-b79a-5331a8884588" containerName="mariadb-account-create" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677987 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82aeaff-100d-45a9-9694-aae65838cf91" containerName="mariadb-account-create" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.677997 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3410b4-a318-4018-85b3-1447b61ae0e5" containerName="mariadb-account-create" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.678007 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae725f0-8063-4795-bbee-c00ee44a38b8" containerName="mariadb-database-create" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.678016 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4820888-8372-4ac2-b8bd-f6d5f1f64770" containerName="mariadb-database-create" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.678025 4482 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="d8499e8b-fa10-4f7c-99bb-7eb09c1ad2c0" containerName="neutron-api" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.678640 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.681440 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.681729 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2s7cr" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.684952 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.687105 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cfr4t"] Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.770245 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-config-data\") pod \"nova-cell0-conductor-db-sync-cfr4t\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") " pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.770703 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srcz5\" (UniqueName: \"kubernetes.io/projected/cda0ef98-7b63-4531-8655-a537323394a7-kube-api-access-srcz5\") pod \"nova-cell0-conductor-db-sync-cfr4t\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") " pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.770772 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-scripts\") pod \"nova-cell0-conductor-db-sync-cfr4t\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") " pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.770791 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cfr4t\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") " pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.873186 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-config-data\") pod \"nova-cell0-conductor-db-sync-cfr4t\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") " pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.873333 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srcz5\" (UniqueName: \"kubernetes.io/projected/cda0ef98-7b63-4531-8655-a537323394a7-kube-api-access-srcz5\") pod \"nova-cell0-conductor-db-sync-cfr4t\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") " pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.873530 4482 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-scripts\") pod \"nova-cell0-conductor-db-sync-cfr4t\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") " pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.873556 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cfr4t\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") " pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.904827 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-scripts\") pod \"nova-cell0-conductor-db-sync-cfr4t\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") " pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.905771 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-config-data\") pod \"nova-cell0-conductor-db-sync-cfr4t\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") " pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.910288 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cfr4t\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") " pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.915145 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srcz5\" (UniqueName: \"kubernetes.io/projected/cda0ef98-7b63-4531-8655-a537323394a7-kube-api-access-srcz5\") pod \"nova-cell0-conductor-db-sync-cfr4t\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") " pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:47 crc kubenswrapper[4482]: I1125 07:03:47.997275 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cfr4t" Nov 25 07:03:49 crc kubenswrapper[4482]: I1125 07:03:49.478049 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:49 crc kubenswrapper[4482]: I1125 07:03:49.480953 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67b6c48dd9-tnxmm" Nov 25 07:03:51 crc kubenswrapper[4482]: I1125 07:03:51.597298 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fbb9df54d-nfljm" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Nov 25 07:03:51 crc kubenswrapper[4482]: I1125 07:03:51.861483 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7949b4656d-jjsj8" podUID="e5634033-0ed5-4a52-9d37-a52ce07e4f50" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Nov 25 07:03:53 crc kubenswrapper[4482]: I1125 07:03:53.301103 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cfr4t"] Nov 25 07:03:53 crc kubenswrapper[4482]: I1125 07:03:53.815261 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cfr4t" event={"ID":"cda0ef98-7b63-4531-8655-a537323394a7","Type":"ContainerStarted","Data":"d6e3a1f065cda323b649f532781cba6ed4f370e75e9b0319e9e4d87617a6c8fd"} Nov 25 07:03:53 crc kubenswrapper[4482]: I1125 07:03:53.820004 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2c0ac8f-2b76-45a3-af85-5990913bc03a","Type":"ContainerStarted","Data":"ce606b7d230b3f87476793909ca2a8a5c173cab166c165a1e8d4d5669eabb34e"} Nov 25 07:03:53 crc kubenswrapper[4482]: I1125 07:03:53.823119 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z8dgz" event={"ID":"6d25c491-a613-4f52-8cb8-95d689bc3000","Type":"ContainerStarted","Data":"04f6ff398a11bfd652274cebdd4ffdf94adc2c7d0c955e6fad0b0ad02da6d9f4"} Nov 25 07:03:53 crc kubenswrapper[4482]: I1125 07:03:53.825919 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-v2dqt" event={"ID":"3e50321d-a59a-4d39-a485-4299ced13bdc","Type":"ContainerStarted","Data":"04077ac8ba14602fb15a0e04ff6d652d77f49c8803782427758af7b08b69a4a7"} Nov 25 07:03:53 crc kubenswrapper[4482]: I1125 07:03:53.849680 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-z8dgz" podStartSLOduration=2.188562965 podStartE2EDuration="1m54.849669844s" podCreationTimestamp="2025-11-25 07:01:59 +0000 UTC" firstStartedPulling="2025-11-25 07:02:00.294007049 +0000 UTC m=+894.782238307" lastFinishedPulling="2025-11-25 07:03:52.955113927 +0000 UTC m=+1007.443345186" observedRunningTime="2025-11-25 07:03:53.843001013 +0000 UTC m=+1008.331232272" watchObservedRunningTime="2025-11-25 07:03:53.849669844 +0000 UTC m=+1008.337901103" Nov 25 07:03:53 crc kubenswrapper[4482]: I1125 07:03:53.859639 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-v2dqt" podStartSLOduration=3.8636840059999997 podStartE2EDuration="1m13.859607318s" podCreationTimestamp="2025-11-25 07:02:40 +0000 UTC" 
firstStartedPulling="2025-11-25 07:02:42.82474844 +0000 UTC m=+937.312979699" lastFinishedPulling="2025-11-25 07:03:52.820671753 +0000 UTC m=+1007.308903011" observedRunningTime="2025-11-25 07:03:53.859181455 +0000 UTC m=+1008.347412714" watchObservedRunningTime="2025-11-25 07:03:53.859607318 +0000 UTC m=+1008.347838577" Nov 25 07:03:57 crc kubenswrapper[4482]: I1125 07:03:57.895525 4482 generic.go:334] "Generic (PLEG): container finished" podID="3e50321d-a59a-4d39-a485-4299ced13bdc" containerID="04077ac8ba14602fb15a0e04ff6d652d77f49c8803782427758af7b08b69a4a7" exitCode=0 Nov 25 07:03:57 crc kubenswrapper[4482]: I1125 07:03:57.895615 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-v2dqt" event={"ID":"3e50321d-a59a-4d39-a485-4299ced13bdc","Type":"ContainerDied","Data":"04077ac8ba14602fb15a0e04ff6d652d77f49c8803782427758af7b08b69a4a7"} Nov 25 07:03:59 crc kubenswrapper[4482]: I1125 07:03:59.307616 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-v2dqt" Nov 25 07:03:59 crc kubenswrapper[4482]: I1125 07:03:59.399871 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e50321d-a59a-4d39-a485-4299ced13bdc-config-data\") pod \"3e50321d-a59a-4d39-a485-4299ced13bdc\" (UID: \"3e50321d-a59a-4d39-a485-4299ced13bdc\") " Nov 25 07:03:59 crc kubenswrapper[4482]: I1125 07:03:59.400025 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jcz5\" (UniqueName: \"kubernetes.io/projected/3e50321d-a59a-4d39-a485-4299ced13bdc-kube-api-access-6jcz5\") pod \"3e50321d-a59a-4d39-a485-4299ced13bdc\" (UID: \"3e50321d-a59a-4d39-a485-4299ced13bdc\") " Nov 25 07:03:59 crc kubenswrapper[4482]: I1125 07:03:59.400050 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e50321d-a59a-4d39-a485-4299ced13bdc-combined-ca-bundle\") pod \"3e50321d-a59a-4d39-a485-4299ced13bdc\" (UID: \"3e50321d-a59a-4d39-a485-4299ced13bdc\") " Nov 25 07:03:59 crc kubenswrapper[4482]: I1125 07:03:59.422828 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e50321d-a59a-4d39-a485-4299ced13bdc-kube-api-access-6jcz5" (OuterVolumeSpecName: "kube-api-access-6jcz5") pod "3e50321d-a59a-4d39-a485-4299ced13bdc" (UID: "3e50321d-a59a-4d39-a485-4299ced13bdc"). InnerVolumeSpecName "kube-api-access-6jcz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:03:59 crc kubenswrapper[4482]: I1125 07:03:59.438282 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e50321d-a59a-4d39-a485-4299ced13bdc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e50321d-a59a-4d39-a485-4299ced13bdc" (UID: "3e50321d-a59a-4d39-a485-4299ced13bdc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:59 crc kubenswrapper[4482]: I1125 07:03:59.502765 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jcz5\" (UniqueName: \"kubernetes.io/projected/3e50321d-a59a-4d39-a485-4299ced13bdc-kube-api-access-6jcz5\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:59 crc kubenswrapper[4482]: I1125 07:03:59.502794 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e50321d-a59a-4d39-a485-4299ced13bdc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:59 crc kubenswrapper[4482]: I1125 07:03:59.517528 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e50321d-a59a-4d39-a485-4299ced13bdc-config-data" (OuterVolumeSpecName: "config-data") pod "3e50321d-a59a-4d39-a485-4299ced13bdc" (UID: "3e50321d-a59a-4d39-a485-4299ced13bdc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:03:59 crc kubenswrapper[4482]: I1125 07:03:59.604761 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e50321d-a59a-4d39-a485-4299ced13bdc-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:03:59 crc kubenswrapper[4482]: I1125 07:03:59.932248 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-v2dqt" Nov 25 07:03:59 crc kubenswrapper[4482]: I1125 07:03:59.932855 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-v2dqt" event={"ID":"3e50321d-a59a-4d39-a485-4299ced13bdc","Type":"ContainerDied","Data":"293862be2d07354e45d1c5f184705d0f03baf24cace4ea5e457140020cc76a92"} Nov 25 07:03:59 crc kubenswrapper[4482]: I1125 07:03:59.932921 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="293862be2d07354e45d1c5f184705d0f03baf24cace4ea5e457140020cc76a92" Nov 25 07:04:01 crc kubenswrapper[4482]: I1125 07:04:01.595005 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fbb9df54d-nfljm" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Nov 25 07:04:01 crc kubenswrapper[4482]: I1125 07:04:01.858957 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7949b4656d-jjsj8" podUID="e5634033-0ed5-4a52-9d37-a52ce07e4f50" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Nov 25 07:04:02 crc kubenswrapper[4482]: I1125 07:04:02.997746 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z8dgz" event={"ID":"6d25c491-a613-4f52-8cb8-95d689bc3000","Type":"ContainerDied","Data":"04f6ff398a11bfd652274cebdd4ffdf94adc2c7d0c955e6fad0b0ad02da6d9f4"} Nov 25 07:04:02 crc kubenswrapper[4482]: I1125 07:04:02.997692 4482 generic.go:334] "Generic (PLEG): container finished" podID="6d25c491-a613-4f52-8cb8-95d689bc3000" containerID="04f6ff398a11bfd652274cebdd4ffdf94adc2c7d0c955e6fad0b0ad02da6d9f4" exitCode=0 Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.369955 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6fccbbd848-gp8qx"] Nov 25 07:04:04 crc kubenswrapper[4482]: E1125 
07:04:04.371570 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e50321d-a59a-4d39-a485-4299ced13bdc" containerName="heat-db-sync" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.371872 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e50321d-a59a-4d39-a485-4299ced13bdc" containerName="heat-db-sync" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.372140 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e50321d-a59a-4d39-a485-4299ced13bdc" containerName="heat-db-sync" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.372895 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.380554 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.380754 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-ngzzq" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.380944 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.414793 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6fccbbd848-gp8qx"] Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.426449 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-config-data-custom\") pod \"heat-engine-6fccbbd848-gp8qx\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.426513 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5b9v\" (UniqueName: \"kubernetes.io/projected/5bda1dfd-9f8b-4fbd-8093-689b7afada79-kube-api-access-n5b9v\") pod \"heat-engine-6fccbbd848-gp8qx\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.426545 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-combined-ca-bundle\") pod \"heat-engine-6fccbbd848-gp8qx\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.426569 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-config-data\") pod \"heat-engine-6fccbbd848-gp8qx\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.438229 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-759996464c-vrqp9"] Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.439593 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.447212 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-759996464c-vrqp9"] Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.490750 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6f98797bb6-chb76"] Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.492283 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.510597 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.524195 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f98797bb6-chb76"] Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.545075 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-ovsdbserver-sb\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.545218 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-config-data-custom\") pod \"heat-engine-6fccbbd848-gp8qx\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.545297 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5b9v\" (UniqueName: \"kubernetes.io/projected/5bda1dfd-9f8b-4fbd-8093-689b7afada79-kube-api-access-n5b9v\") pod \"heat-engine-6fccbbd848-gp8qx\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.545334 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-combined-ca-bundle\") pod \"heat-engine-6fccbbd848-gp8qx\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.545370 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-config\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.545397 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-config-data\") pod \"heat-engine-6fccbbd848-gp8qx\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.545429 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-ovsdbserver-nb\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.545679 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4w6s\" (UniqueName: \"kubernetes.io/projected/b0810e3e-ce88-42f5-a47d-8e101088577b-kube-api-access-l4w6s\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.545763 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-dns-svc\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.545809 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-dns-swift-storage-0\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.593106 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-config-data-custom\") pod \"heat-engine-6fccbbd848-gp8qx\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.601924 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-combined-ca-bundle\") pod \"heat-engine-6fccbbd848-gp8qx\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.629648 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-config-data\") pod \"heat-engine-6fccbbd848-gp8qx\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.630843 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5b9v\" (UniqueName: \"kubernetes.io/projected/5bda1dfd-9f8b-4fbd-8093-689b7afada79-kube-api-access-n5b9v\") pod \"heat-engine-6fccbbd848-gp8qx\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.653825 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-config-data\") pod \"heat-cfnapi-6f98797bb6-chb76\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.656827 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vq2t4\" (UniqueName: \"kubernetes.io/projected/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-kube-api-access-vq2t4\") pod \"heat-cfnapi-6f98797bb6-chb76\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.656887 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4w6s\" (UniqueName: \"kubernetes.io/projected/b0810e3e-ce88-42f5-a47d-8e101088577b-kube-api-access-l4w6s\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.656987 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-dns-svc\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.657040 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-dns-swift-storage-0\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.657124 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-ovsdbserver-sb\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.657237 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-combined-ca-bundle\") pod \"heat-cfnapi-6f98797bb6-chb76\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.657325 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-config\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.657364 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-ovsdbserver-nb\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.657512 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-config-data-custom\") pod \"heat-cfnapi-6f98797bb6-chb76\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.658706 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-dns-svc\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.667742 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-ovsdbserver-sb\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.667818 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-config\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.674485 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-dns-swift-storage-0\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.674631 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-ovsdbserver-nb\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.686316 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4w6s\" (UniqueName: \"kubernetes.io/projected/b0810e3e-ce88-42f5-a47d-8e101088577b-kube-api-access-l4w6s\") pod \"dnsmasq-dns-759996464c-vrqp9\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.715287 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.749970 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-dddd66fdc-jvpm8"] Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.751296 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.753584 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.759853 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-config-data-custom\") pod \"heat-api-dddd66fdc-jvpm8\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.759908 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-config-data-custom\") pod \"heat-cfnapi-6f98797bb6-chb76\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.759934 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-config-data\") pod \"heat-cfnapi-6f98797bb6-chb76\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.759952 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq2t4\" (UniqueName: \"kubernetes.io/projected/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-kube-api-access-vq2t4\") pod \"heat-cfnapi-6f98797bb6-chb76\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.759985 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-combined-ca-bundle\") pod \"heat-api-dddd66fdc-jvpm8\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.760000 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbcjt\" (UniqueName: \"kubernetes.io/projected/f8e32069-3248-4216-a894-0ea4558d88f9-kube-api-access-jbcjt\") pod \"heat-api-dddd66fdc-jvpm8\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.760060 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-combined-ca-bundle\") pod \"heat-cfnapi-6f98797bb6-chb76\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.760095 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-config-data\") pod \"heat-api-dddd66fdc-jvpm8\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.765967 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.801650 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq2t4\" (UniqueName: \"kubernetes.io/projected/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-kube-api-access-vq2t4\") pod \"heat-cfnapi-6f98797bb6-chb76\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.802034 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-combined-ca-bundle\") pod \"heat-cfnapi-6f98797bb6-chb76\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.802450 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-config-data\") pod \"heat-cfnapi-6f98797bb6-chb76\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.802974 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-config-data-custom\") pod \"heat-cfnapi-6f98797bb6-chb76\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.812651 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-dddd66fdc-jvpm8"] Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.838897 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.861731 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-combined-ca-bundle\") pod \"heat-api-dddd66fdc-jvpm8\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.861763 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbcjt\" (UniqueName: \"kubernetes.io/projected/f8e32069-3248-4216-a894-0ea4558d88f9-kube-api-access-jbcjt\") pod \"heat-api-dddd66fdc-jvpm8\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.861880 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-config-data\") pod \"heat-api-dddd66fdc-jvpm8\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.861916 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-config-data-custom\") pod \"heat-api-dddd66fdc-jvpm8\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.874881 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-combined-ca-bundle\") pod \"heat-api-dddd66fdc-jvpm8\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.886978 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-config-data-custom\") pod \"heat-api-dddd66fdc-jvpm8\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.891021 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-config-data\") pod \"heat-api-dddd66fdc-jvpm8\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:04 crc kubenswrapper[4482]: I1125 07:04:04.897725 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbcjt\" (UniqueName: \"kubernetes.io/projected/f8e32069-3248-4216-a894-0ea4558d88f9-kube-api-access-jbcjt\") pod \"heat-api-dddd66fdc-jvpm8\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:05 crc kubenswrapper[4482]: I1125 07:04:05.072669 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:07 crc kubenswrapper[4482]: I1125 07:04:07.055235 4482 generic.go:334] "Generic (PLEG): container finished" podID="961bd3cf-55d9-48b0-8f63-a8c2c2942c41" containerID="8aeeb04d8a45f0028a1578da836ad37e2c561b145a97c16a4cbc933b4edbc209" exitCode=137 Nov 25 07:04:07 crc kubenswrapper[4482]: I1125 07:04:07.055308 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d554fc8c-f2fdb" event={"ID":"961bd3cf-55d9-48b0-8f63-a8c2c2942c41","Type":"ContainerDied","Data":"8aeeb04d8a45f0028a1578da836ad37e2c561b145a97c16a4cbc933b4edbc209"} Nov 25 07:04:07 crc kubenswrapper[4482]: I1125 07:04:07.059546 4482 generic.go:334] "Generic (PLEG): container finished" podID="0204e2ef-b54e-40fd-a896-d366754a5b5f" containerID="9e1af7d92fe34ad17e33f0c96dc29aa6a0740ebd19190c7322558bd36252afa8" exitCode=137 Nov 25 07:04:07 crc kubenswrapper[4482]: I1125 07:04:07.059612 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5666447f7c-7kf4h" event={"ID":"0204e2ef-b54e-40fd-a896-d366754a5b5f","Type":"ContainerDied","Data":"9e1af7d92fe34ad17e33f0c96dc29aa6a0740ebd19190c7322558bd36252afa8"} Nov 25 07:04:08 crc kubenswrapper[4482]: I1125 07:04:08.070908 4482 generic.go:334] "Generic (PLEG): container finished" podID="961bd3cf-55d9-48b0-8f63-a8c2c2942c41" containerID="bf0cd49e922ceff1640de3610179e0e09e5d4ee50f2cce197953afece2e60fa9" exitCode=137 Nov 25 07:04:08 crc kubenswrapper[4482]: I1125 07:04:08.070975 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d554fc8c-f2fdb" event={"ID":"961bd3cf-55d9-48b0-8f63-a8c2c2942c41","Type":"ContainerDied","Data":"bf0cd49e922ceff1640de3610179e0e09e5d4ee50f2cce197953afece2e60fa9"} Nov 25 07:04:08 crc kubenswrapper[4482]: I1125 07:04:08.075851 4482 generic.go:334] "Generic (PLEG): container finished" podID="a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" containerID="1b58a6c9c63d02d1c03df8a3e99942660dc44a9e6fee08f5d33872a90f509b15" exitCode=137 Nov 25 07:04:08 crc kubenswrapper[4482]: I1125 07:04:08.075907 4482 generic.go:334] "Generic (PLEG): container finished" podID="a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" containerID="e2654ff2424d40b9a2887f182e4139b04d1150ed18bc38868daa6caac58a4b4d" exitCode=137 Nov 25 07:04:08 crc kubenswrapper[4482]: I1125 07:04:08.075933 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76cc5bdc65-wzwtb" event={"ID":"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1","Type":"ContainerDied","Data":"1b58a6c9c63d02d1c03df8a3e99942660dc44a9e6fee08f5d33872a90f509b15"} Nov 25 07:04:08 crc kubenswrapper[4482]: I1125 07:04:08.075974 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76cc5bdc65-wzwtb" event={"ID":"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1","Type":"ContainerDied","Data":"e2654ff2424d40b9a2887f182e4139b04d1150ed18bc38868daa6caac58a4b4d"} Nov 25 07:04:08 crc kubenswrapper[4482]: I1125 07:04:08.082495 4482 generic.go:334] "Generic (PLEG): container finished" podID="0204e2ef-b54e-40fd-a896-d366754a5b5f" containerID="bc8e586857a5aa46d535df56f5ad048383cb1a5f158552d4efc1df3f74d3c7f6" exitCode=137 Nov 25 07:04:08 crc kubenswrapper[4482]: I1125 07:04:08.082529 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5666447f7c-7kf4h" event={"ID":"0204e2ef-b54e-40fd-a896-d366754a5b5f","Type":"ContainerDied","Data":"bc8e586857a5aa46d535df56f5ad048383cb1a5f158552d4efc1df3f74d3c7f6"} Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.073444 4482 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z8dgz" Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.090060 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-combined-ca-bundle\") pod \"6d25c491-a613-4f52-8cb8-95d689bc3000\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.090122 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-db-sync-config-data\") pod \"6d25c491-a613-4f52-8cb8-95d689bc3000\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.090148 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-config-data\") pod \"6d25c491-a613-4f52-8cb8-95d689bc3000\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.090246 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gswq\" (UniqueName: \"kubernetes.io/projected/6d25c491-a613-4f52-8cb8-95d689bc3000-kube-api-access-8gswq\") pod \"6d25c491-a613-4f52-8cb8-95d689bc3000\" (UID: \"6d25c491-a613-4f52-8cb8-95d689bc3000\") " Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.101323 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d25c491-a613-4f52-8cb8-95d689bc3000-kube-api-access-8gswq" (OuterVolumeSpecName: "kube-api-access-8gswq") pod "6d25c491-a613-4f52-8cb8-95d689bc3000" (UID: "6d25c491-a613-4f52-8cb8-95d689bc3000"). InnerVolumeSpecName "kube-api-access-8gswq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.101604 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6d25c491-a613-4f52-8cb8-95d689bc3000" (UID: "6d25c491-a613-4f52-8cb8-95d689bc3000"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.107682 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z8dgz" event={"ID":"6d25c491-a613-4f52-8cb8-95d689bc3000","Type":"ContainerDied","Data":"7e6f10008faf27410904e345dd699b876edc1d0b012aaf3f4007a8cfd625b509"} Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.107724 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e6f10008faf27410904e345dd699b876edc1d0b012aaf3f4007a8cfd625b509" Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.107788 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z8dgz" Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.161993 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d25c491-a613-4f52-8cb8-95d689bc3000" (UID: "6d25c491-a613-4f52-8cb8-95d689bc3000"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.166757 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-config-data" (OuterVolumeSpecName: "config-data") pod "6d25c491-a613-4f52-8cb8-95d689bc3000" (UID: "6d25c491-a613-4f52-8cb8-95d689bc3000"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.195564 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.195594 4482 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.195606 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d25c491-a613-4f52-8cb8-95d689bc3000-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:09 crc kubenswrapper[4482]: I1125 07:04:09.195619 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gswq\" (UniqueName: \"kubernetes.io/projected/6d25c491-a613-4f52-8cb8-95d689bc3000-kube-api-access-8gswq\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.482454 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-759996464c-vrqp9"] Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.523334 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5557bd8f45-rxxpl"] Nov 25 07:04:10 crc kubenswrapper[4482]: E1125 07:04:10.523785 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d25c491-a613-4f52-8cb8-95d689bc3000" containerName="glance-db-sync" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.523805 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d25c491-a613-4f52-8cb8-95d689bc3000" containerName="glance-db-sync" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.524042 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d25c491-a613-4f52-8cb8-95d689bc3000" containerName="glance-db-sync" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.525031 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.552899 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5557bd8f45-rxxpl"] Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.633931 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-dns-svc\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.634059 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-ovsdbserver-nb\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.634135 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-config\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.634192 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrccq\" (UniqueName: \"kubernetes.io/projected/f9112227-4108-4545-b5ae-d9e3a5d79faa-kube-api-access-lrccq\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.634221 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-ovsdbserver-sb\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.634235 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-dns-swift-storage-0\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.735772 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-dns-svc\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.735861 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-ovsdbserver-nb\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.735915 4482 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-config\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.735947 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrccq\" (UniqueName: \"kubernetes.io/projected/f9112227-4108-4545-b5ae-d9e3a5d79faa-kube-api-access-lrccq\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.735972 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-ovsdbserver-sb\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.737048 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-dns-svc\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.737216 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-ovsdbserver-nb\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.737235 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-ovsdbserver-sb\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.735993 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-dns-swift-storage-0\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.782632 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-dns-swift-storage-0\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.786044 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-config\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.812825 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrccq\" (UniqueName: 
\"kubernetes.io/projected/f9112227-4108-4545-b5ae-d9e3a5d79faa-kube-api-access-lrccq\") pod \"dnsmasq-dns-5557bd8f45-rxxpl\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") " pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:10 crc kubenswrapper[4482]: I1125 07:04:10.880578 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.087290 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-94697d564-bgxtg"] Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.089527 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.187491 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0d2c911-b73a-4216-a6c2-5642b7083f37-config-data-custom\") pod \"heat-engine-94697d564-bgxtg\" (UID: \"a0d2c911-b73a-4216-a6c2-5642b7083f37\") " pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.187984 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d2c911-b73a-4216-a6c2-5642b7083f37-combined-ca-bundle\") pod \"heat-engine-94697d564-bgxtg\" (UID: \"a0d2c911-b73a-4216-a6c2-5642b7083f37\") " pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.188037 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d2c911-b73a-4216-a6c2-5642b7083f37-config-data\") pod \"heat-engine-94697d564-bgxtg\" (UID: \"a0d2c911-b73a-4216-a6c2-5642b7083f37\") " pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.188071 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9fh\" (UniqueName: \"kubernetes.io/projected/a0d2c911-b73a-4216-a6c2-5642b7083f37-kube-api-access-8n9fh\") pod \"heat-engine-94697d564-bgxtg\" (UID: \"a0d2c911-b73a-4216-a6c2-5642b7083f37\") " pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.216225 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-94697d564-bgxtg"] Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.230526 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-8549f976cf-6szl5"] Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.232360 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.295114 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d2c911-b73a-4216-a6c2-5642b7083f37-config-data\") pod \"heat-engine-94697d564-bgxtg\" (UID: \"a0d2c911-b73a-4216-a6c2-5642b7083f37\") " pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.295194 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n9fh\" (UniqueName: \"kubernetes.io/projected/a0d2c911-b73a-4216-a6c2-5642b7083f37-kube-api-access-8n9fh\") pod \"heat-engine-94697d564-bgxtg\" (UID: \"a0d2c911-b73a-4216-a6c2-5642b7083f37\") " pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.295225 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-combined-ca-bundle\") pod \"heat-cfnapi-8549f976cf-6szl5\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") " pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.295313 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-config-data\") pod \"heat-cfnapi-8549f976cf-6szl5\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") " pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.295347 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgstp\" (UniqueName: \"kubernetes.io/projected/5c662f2a-8694-4f15-8e15-edadbbdaa093-kube-api-access-qgstp\") pod \"heat-cfnapi-8549f976cf-6szl5\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") " pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.295388 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0d2c911-b73a-4216-a6c2-5642b7083f37-config-data-custom\") pod \"heat-engine-94697d564-bgxtg\" (UID: \"a0d2c911-b73a-4216-a6c2-5642b7083f37\") " pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.295434 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-config-data-custom\") pod \"heat-cfnapi-8549f976cf-6szl5\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") " pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.295468 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d2c911-b73a-4216-a6c2-5642b7083f37-combined-ca-bundle\") pod \"heat-engine-94697d564-bgxtg\" (UID: \"a0d2c911-b73a-4216-a6c2-5642b7083f37\") " pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.299686 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d2c911-b73a-4216-a6c2-5642b7083f37-combined-ca-bundle\") pod 
\"heat-engine-94697d564-bgxtg\" (UID: \"a0d2c911-b73a-4216-a6c2-5642b7083f37\") " pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.321644 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0d2c911-b73a-4216-a6c2-5642b7083f37-config-data-custom\") pod \"heat-engine-94697d564-bgxtg\" (UID: \"a0d2c911-b73a-4216-a6c2-5642b7083f37\") " pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.326345 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d2c911-b73a-4216-a6c2-5642b7083f37-config-data\") pod \"heat-engine-94697d564-bgxtg\" (UID: \"a0d2c911-b73a-4216-a6c2-5642b7083f37\") " pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.333421 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8549f976cf-6szl5"] Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.349888 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6bf74b5bc8-nqmwd"] Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.353145 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.369271 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6bf74b5bc8-nqmwd"] Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.401224 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt7nr\" (UniqueName: \"kubernetes.io/projected/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-kube-api-access-pt7nr\") pod \"heat-api-6bf74b5bc8-nqmwd\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") " pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.401483 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-combined-ca-bundle\") pod \"heat-cfnapi-8549f976cf-6szl5\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") " pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.401571 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-config-data-custom\") pod \"heat-api-6bf74b5bc8-nqmwd\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") " pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.401702 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-config-data\") pod \"heat-api-6bf74b5bc8-nqmwd\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") " pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.401771 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-config-data\") pod \"heat-cfnapi-8549f976cf-6szl5\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") " pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc 
kubenswrapper[4482]: I1125 07:04:11.401847 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgstp\" (UniqueName: \"kubernetes.io/projected/5c662f2a-8694-4f15-8e15-edadbbdaa093-kube-api-access-qgstp\") pod \"heat-cfnapi-8549f976cf-6szl5\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") " pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.401919 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-combined-ca-bundle\") pod \"heat-api-6bf74b5bc8-nqmwd\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") " pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.402029 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-config-data-custom\") pod \"heat-cfnapi-8549f976cf-6szl5\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") " pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.406378 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n9fh\" (UniqueName: \"kubernetes.io/projected/a0d2c911-b73a-4216-a6c2-5642b7083f37-kube-api-access-8n9fh\") pod \"heat-engine-94697d564-bgxtg\" (UID: \"a0d2c911-b73a-4216-a6c2-5642b7083f37\") " pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.417460 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.418475 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-config-data-custom\") pod \"heat-cfnapi-8549f976cf-6szl5\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") " pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.419310 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-combined-ca-bundle\") pod \"heat-cfnapi-8549f976cf-6szl5\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") " pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.426553 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-config-data\") pod \"heat-cfnapi-8549f976cf-6szl5\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") " pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.434303 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgstp\" (UniqueName: \"kubernetes.io/projected/5c662f2a-8694-4f15-8e15-edadbbdaa093-kube-api-access-qgstp\") pod \"heat-cfnapi-8549f976cf-6szl5\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") " pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.507903 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt7nr\" (UniqueName: 
\"kubernetes.io/projected/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-kube-api-access-pt7nr\") pod \"heat-api-6bf74b5bc8-nqmwd\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") " pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.508034 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-config-data-custom\") pod \"heat-api-6bf74b5bc8-nqmwd\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") " pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.508179 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-config-data\") pod \"heat-api-6bf74b5bc8-nqmwd\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") " pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.508242 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-combined-ca-bundle\") pod \"heat-api-6bf74b5bc8-nqmwd\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") " pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.512077 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-config-data-custom\") pod \"heat-api-6bf74b5bc8-nqmwd\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") " pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.515494 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-config-data\") pod \"heat-api-6bf74b5bc8-nqmwd\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") " pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.519438 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-combined-ca-bundle\") pod \"heat-api-6bf74b5bc8-nqmwd\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") " pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.548354 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.550106 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.555794 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt7nr\" (UniqueName: \"kubernetes.io/projected/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-kube-api-access-pt7nr\") pod \"heat-api-6bf74b5bc8-nqmwd\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") " pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.560091 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.563855 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.564946 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.565131 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nc9ld" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.565368 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.611438 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/278924b6-38eb-418e-87b6-be1872ee5464-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.611606 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.611712 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-scripts\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.611783 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-config-data\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.611854 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.612075 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/278924b6-38eb-418e-87b6-be1872ee5464-logs\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.612239 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntf4x\" (UniqueName: \"kubernetes.io/projected/278924b6-38eb-418e-87b6-be1872ee5464-kube-api-access-ntf4x\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.652892 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] 
Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.654948 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.658780 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.663297 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715414 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26825\" (UniqueName: \"kubernetes.io/projected/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-kube-api-access-26825\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715467 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715513 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/278924b6-38eb-418e-87b6-be1872ee5464-logs\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715532 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715557 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715584 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntf4x\" (UniqueName: \"kubernetes.io/projected/278924b6-38eb-418e-87b6-be1872ee5464-kube-api-access-ntf4x\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715686 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/278924b6-38eb-418e-87b6-be1872ee5464-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715705 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715735 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715759 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-scripts\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715774 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715790 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-config-data\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715810 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.715835 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.716867 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.716920 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/278924b6-38eb-418e-87b6-be1872ee5464-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.717555 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/278924b6-38eb-418e-87b6-be1872ee5464-logs\") pod \"glance-default-external-api-0\" 
(UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.727881 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-config-data\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.730416 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-scripts\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.730482 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.731719 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntf4x\" (UniqueName: \"kubernetes.io/projected/278924b6-38eb-418e-87b6-be1872ee5464-kube-api-access-ntf4x\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.754693 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.758646 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.817868 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26825\" (UniqueName: \"kubernetes.io/projected/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-kube-api-access-26825\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.817918 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.817958 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.817981 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.818078 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.818115 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.818152 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.818342 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.818839 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: 
I1125 07:04:11.819722 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.827188 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.829645 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.829872 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.837007 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26825\" (UniqueName: \"kubernetes.io/projected/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-kube-api-access-26825\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.849790 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.885002 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 07:04:11 crc kubenswrapper[4482]: I1125 07:04:11.981610 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:12 crc kubenswrapper[4482]: I1125 07:04:12.068408 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.375553 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f98797bb6-chb76"] Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.385583 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-dddd66fdc-jvpm8"] Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.406210 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-b57c4d7bd-prkv2"] Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.408160 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.411614 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.411763 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.433970 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-55c7dc97f5-ffnl6"] Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.435031 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.447578 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.447975 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.453853 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-b57c4d7bd-prkv2"] Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.457403 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-public-tls-certs\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.457469 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-config-data\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.457484 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pzkx\" (UniqueName: \"kubernetes.io/projected/33132915-ebcf-4d71-83af-26542eb68ac6-kube-api-access-6pzkx\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.457513 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-config-data-custom\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.457538 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-config-data\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.457582 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-combined-ca-bundle\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.457640 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-config-data-custom\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.457699 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-internal-tls-certs\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.457713 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6vwf\" (UniqueName: \"kubernetes.io/projected/2db5521c-32ce-484e-a9a8-6481deedd275-kube-api-access-w6vwf\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.457734 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-combined-ca-bundle\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.457753 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-internal-tls-certs\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.457779 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-public-tls-certs\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.470440 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-55c7dc97f5-ffnl6"] Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.559638 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-combined-ca-bundle\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.559733 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-config-data-custom\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: 
\"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.559869 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-internal-tls-certs\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.559886 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6vwf\" (UniqueName: \"kubernetes.io/projected/2db5521c-32ce-484e-a9a8-6481deedd275-kube-api-access-w6vwf\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.560005 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-combined-ca-bundle\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.560025 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-internal-tls-certs\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.560044 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-public-tls-certs\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.560156 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-public-tls-certs\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.560307 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-config-data\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.560332 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pzkx\" (UniqueName: \"kubernetes.io/projected/33132915-ebcf-4d71-83af-26542eb68ac6-kube-api-access-6pzkx\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.563386 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-config-data-custom\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " 
pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.563475 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-config-data\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.568869 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-config-data\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.569020 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-internal-tls-certs\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.569325 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-public-tls-certs\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.571700 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-config-data-custom\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.574001 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-config-data\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.576264 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-combined-ca-bundle\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.580901 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-config-data-custom\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.586115 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6vwf\" (UniqueName: \"kubernetes.io/projected/2db5521c-32ce-484e-a9a8-6481deedd275-kube-api-access-w6vwf\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.590641 4482 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-public-tls-certs\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.591732 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33132915-ebcf-4d71-83af-26542eb68ac6-internal-tls-certs\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.594928 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pzkx\" (UniqueName: \"kubernetes.io/projected/33132915-ebcf-4d71-83af-26542eb68ac6-kube-api-access-6pzkx\") pod \"heat-cfnapi-b57c4d7bd-prkv2\" (UID: \"33132915-ebcf-4d71-83af-26542eb68ac6\") " pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.603383 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db5521c-32ce-484e-a9a8-6481deedd275-combined-ca-bundle\") pod \"heat-api-55c7dc97f5-ffnl6\" (UID: \"2db5521c-32ce-484e-a9a8-6481deedd275\") " pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.727564 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:13 crc kubenswrapper[4482]: I1125 07:04:13.775686 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:14 crc kubenswrapper[4482]: I1125 07:04:14.350753 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:04:14 crc kubenswrapper[4482]: I1125 07:04:14.443623 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:04:14 crc kubenswrapper[4482]: I1125 07:04:14.879832 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 07:04:16 crc kubenswrapper[4482]: I1125 07:04:16.500729 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7949b4656d-jjsj8" Nov 25 07:04:16 crc kubenswrapper[4482]: I1125 07:04:16.569638 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fbb9df54d-nfljm"] Nov 25 07:04:16 crc kubenswrapper[4482]: I1125 07:04:16.570268 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fbb9df54d-nfljm" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon" containerID="cri-o://b413209fdcec3cfb2c8c8ab7f1f86197105913d1fe9b1a9351cbb40552f3741c" gracePeriod=30 Nov 25 07:04:16 crc kubenswrapper[4482]: I1125 07:04:16.569919 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fbb9df54d-nfljm" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon-log" containerID="cri-o://bf57552a7fbbb61e7934b0e4c3f0cff69fbc4f6dd5ce6c818e2a6a4c59ffa912" gracePeriod=30 Nov 25 07:04:16 crc kubenswrapper[4482]: I1125 07:04:16.600244 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fbb9df54d-nfljm" 
podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Nov 25 07:04:20 crc kubenswrapper[4482]: I1125 07:04:20.006934 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fbb9df54d-nfljm" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:37456->10.217.0.150:8443: read: connection reset by peer" Nov 25 07:04:20 crc kubenswrapper[4482]: I1125 07:04:20.275445 4482 generic.go:334] "Generic (PLEG): container finished" podID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerID="b413209fdcec3cfb2c8c8ab7f1f86197105913d1fe9b1a9351cbb40552f3741c" exitCode=0 Nov 25 07:04:20 crc kubenswrapper[4482]: I1125 07:04:20.275676 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fbb9df54d-nfljm" event={"ID":"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db","Type":"ContainerDied","Data":"b413209fdcec3cfb2c8c8ab7f1f86197105913d1fe9b1a9351cbb40552f3741c"} Nov 25 07:04:21 crc kubenswrapper[4482]: I1125 07:04:21.595291 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fbb9df54d-nfljm" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Nov 25 07:04:24 crc kubenswrapper[4482]: E1125 07:04:24.117493 4482 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24@sha256:8536169e5537fe6c330eba814248abdcf39cdd8f7e7336034d74e6fda9544050" Nov 25 07:04:24 crc kubenswrapper[4482]: E1125 07:04:24.118135 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:8536169e5537fe6c330eba814248abdcf39cdd8f7e7336034d74e6fda9544050,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gx7r8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b2c0ac8f-2b76-45a3-af85-5990913bc03a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 07:04:24 crc kubenswrapper[4482]: E1125 07:04:24.119731 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.249533 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78d554fc8c-f2fdb" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.271522 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-horizon-secret-key\") pod \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.271603 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-logs\") pod \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.271705 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-scripts\") pod \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.271775 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8bvc\" (UniqueName: \"kubernetes.io/projected/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-kube-api-access-g8bvc\") pod \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.271880 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-config-data\") pod \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\" (UID: \"961bd3cf-55d9-48b0-8f63-a8c2c2942c41\") " Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.273565 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-logs" (OuterVolumeSpecName: "logs") pod "961bd3cf-55d9-48b0-8f63-a8c2c2942c41" (UID: "961bd3cf-55d9-48b0-8f63-a8c2c2942c41"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.277763 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "961bd3cf-55d9-48b0-8f63-a8c2c2942c41" (UID: "961bd3cf-55d9-48b0-8f63-a8c2c2942c41"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.278832 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-kube-api-access-g8bvc" (OuterVolumeSpecName: "kube-api-access-g8bvc") pod "961bd3cf-55d9-48b0-8f63-a8c2c2942c41" (UID: "961bd3cf-55d9-48b0-8f63-a8c2c2942c41"). InnerVolumeSpecName "kube-api-access-g8bvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.327930 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-scripts" (OuterVolumeSpecName: "scripts") pod "961bd3cf-55d9-48b0-8f63-a8c2c2942c41" (UID: "961bd3cf-55d9-48b0-8f63-a8c2c2942c41"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.329330 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d554fc8c-f2fdb" event={"ID":"961bd3cf-55d9-48b0-8f63-a8c2c2942c41","Type":"ContainerDied","Data":"da65c75ab379384341d22f4f0f222fc35f34a5dbdfeafcfec05d07ff228cc94c"} Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.329381 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78d554fc8c-f2fdb" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.329442 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerName="sg-core" containerID="cri-o://ce606b7d230b3f87476793909ca2a8a5c173cab166c165a1e8d4d5669eabb34e" gracePeriod=30 Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.329459 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerName="ceilometer-notification-agent" containerID="cri-o://263e528ee7c793c546f9a438b4f1ef055b77e1781dd02fdce8655af5d75c9bb1" gracePeriod=30 Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.329386 4482 scope.go:117] "RemoveContainer" containerID="bf0cd49e922ceff1640de3610179e0e09e5d4ee50f2cce197953afece2e60fa9" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.333778 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-config-data" (OuterVolumeSpecName: "config-data") pod "961bd3cf-55d9-48b0-8f63-a8c2c2942c41" (UID: "961bd3cf-55d9-48b0-8f63-a8c2c2942c41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.329035 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerName="ceilometer-central-agent" containerID="cri-o://40be42855bfac49bec1255396dd5e074aaecd8d028edf160e46ceab36f50c2dd" gracePeriod=30 Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.375413 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.375454 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.375468 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8bvc\" (UniqueName: \"kubernetes.io/projected/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-kube-api-access-g8bvc\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.375482 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.375494 4482 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/961bd3cf-55d9-48b0-8f63-a8c2c2942c41-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.660551 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78d554fc8c-f2fdb"] Nov 25 07:04:24 crc kubenswrapper[4482]: I1125 07:04:24.670295 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78d554fc8c-f2fdb"] Nov 25 07:04:25 crc kubenswrapper[4482]: I1125 07:04:25.344288 4482 generic.go:334] "Generic (PLEG): container finished" podID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerID="ce606b7d230b3f87476793909ca2a8a5c173cab166c165a1e8d4d5669eabb34e" exitCode=2 Nov 25 07:04:25 crc kubenswrapper[4482]: I1125 07:04:25.344617 4482 generic.go:334] "Generic (PLEG): container finished" podID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerID="40be42855bfac49bec1255396dd5e074aaecd8d028edf160e46ceab36f50c2dd" exitCode=0 Nov 25 07:04:25 crc kubenswrapper[4482]: I1125 07:04:25.344472 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2c0ac8f-2b76-45a3-af85-5990913bc03a","Type":"ContainerDied","Data":"ce606b7d230b3f87476793909ca2a8a5c173cab166c165a1e8d4d5669eabb34e"} Nov 25 07:04:25 crc kubenswrapper[4482]: I1125 07:04:25.344680 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2c0ac8f-2b76-45a3-af85-5990913bc03a","Type":"ContainerDied","Data":"40be42855bfac49bec1255396dd5e074aaecd8d028edf160e46ceab36f50c2dd"} Nov 25 07:04:25 crc kubenswrapper[4482]: I1125 07:04:25.853398 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="961bd3cf-55d9-48b0-8f63-a8c2c2942c41" path="/var/lib/kubelet/pods/961bd3cf-55d9-48b0-8f63-a8c2c2942c41/volumes" Nov 25 07:04:25 crc kubenswrapper[4482]: E1125 07:04:25.877544 4482 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:1f5c0439f2433cb462b222a5bb23e629" Nov 25 07:04:25 crc kubenswrapper[4482]: E1125 07:04:25.877626 4482 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:1f5c0439f2433cb462b222a5bb23e629" Nov 25 07:04:25 crc kubenswrapper[4482]: E1125 07:04:25.877797 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:1f5c0439f2433cb462b222a5bb23e629,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6x86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-ggvxs_openstack(6f1385f6-5258-4372-a20a-30a7229ec2e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 07:04:25 crc kubenswrapper[4482]: E1125 07:04:25.878992 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-ggvxs" podUID="6f1385f6-5258-4372-a20a-30a7229ec2e8" Nov 25 07:04:25 crc 
kubenswrapper[4482]: I1125 07:04:25.975310 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5666447f7c-7kf4h" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.000090 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.014231 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-logs\") pod \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.014331 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0204e2ef-b54e-40fd-a896-d366754a5b5f-horizon-secret-key\") pod \"0204e2ef-b54e-40fd-a896-d366754a5b5f\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.014422 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjzsv\" (UniqueName: \"kubernetes.io/projected/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-kube-api-access-wjzsv\") pod \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.014451 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgmsm\" (UniqueName: \"kubernetes.io/projected/0204e2ef-b54e-40fd-a896-d366754a5b5f-kube-api-access-fgmsm\") pod \"0204e2ef-b54e-40fd-a896-d366754a5b5f\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.014547 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0204e2ef-b54e-40fd-a896-d366754a5b5f-logs\") pod \"0204e2ef-b54e-40fd-a896-d366754a5b5f\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.014589 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0204e2ef-b54e-40fd-a896-d366754a5b5f-scripts\") pod \"0204e2ef-b54e-40fd-a896-d366754a5b5f\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.014634 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-horizon-secret-key\") pod \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.014653 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0204e2ef-b54e-40fd-a896-d366754a5b5f-config-data\") pod \"0204e2ef-b54e-40fd-a896-d366754a5b5f\" (UID: \"0204e2ef-b54e-40fd-a896-d366754a5b5f\") " Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.015130 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-scripts\") pod \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 
07:04:26.015283 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-config-data\") pod \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\" (UID: \"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1\") " Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.017041 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0204e2ef-b54e-40fd-a896-d366754a5b5f-logs" (OuterVolumeSpecName: "logs") pod "0204e2ef-b54e-40fd-a896-d366754a5b5f" (UID: "0204e2ef-b54e-40fd-a896-d366754a5b5f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.035644 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-logs" (OuterVolumeSpecName: "logs") pod "a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" (UID: "a4ff9cda-d978-4d85-a14f-7e7ae2157ea1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.037442 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-kube-api-access-wjzsv" (OuterVolumeSpecName: "kube-api-access-wjzsv") pod "a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" (UID: "a4ff9cda-d978-4d85-a14f-7e7ae2157ea1"). InnerVolumeSpecName "kube-api-access-wjzsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.037986 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" (UID: "a4ff9cda-d978-4d85-a14f-7e7ae2157ea1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.038012 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0204e2ef-b54e-40fd-a896-d366754a5b5f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0204e2ef-b54e-40fd-a896-d366754a5b5f" (UID: "0204e2ef-b54e-40fd-a896-d366754a5b5f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.043156 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0204e2ef-b54e-40fd-a896-d366754a5b5f-kube-api-access-fgmsm" (OuterVolumeSpecName: "kube-api-access-fgmsm") pod "0204e2ef-b54e-40fd-a896-d366754a5b5f" (UID: "0204e2ef-b54e-40fd-a896-d366754a5b5f"). InnerVolumeSpecName "kube-api-access-fgmsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.057462 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0204e2ef-b54e-40fd-a896-d366754a5b5f-config-data" (OuterVolumeSpecName: "config-data") pod "0204e2ef-b54e-40fd-a896-d366754a5b5f" (UID: "0204e2ef-b54e-40fd-a896-d366754a5b5f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.071512 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0204e2ef-b54e-40fd-a896-d366754a5b5f-scripts" (OuterVolumeSpecName: "scripts") pod "0204e2ef-b54e-40fd-a896-d366754a5b5f" (UID: "0204e2ef-b54e-40fd-a896-d366754a5b5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.123774 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-config-data" (OuterVolumeSpecName: "config-data") pod "a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" (UID: "a4ff9cda-d978-4d85-a14f-7e7ae2157ea1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.124606 4482 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.124641 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0204e2ef-b54e-40fd-a896-d366754a5b5f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.124654 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.124667 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.124686 4482 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0204e2ef-b54e-40fd-a896-d366754a5b5f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.124697 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjzsv\" (UniqueName: \"kubernetes.io/projected/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-kube-api-access-wjzsv\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.124717 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgmsm\" (UniqueName: \"kubernetes.io/projected/0204e2ef-b54e-40fd-a896-d366754a5b5f-kube-api-access-fgmsm\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.124729 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0204e2ef-b54e-40fd-a896-d366754a5b5f-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.124737 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0204e2ef-b54e-40fd-a896-d366754a5b5f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.137654 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-scripts" (OuterVolumeSpecName: "scripts") pod 
"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" (UID: "a4ff9cda-d978-4d85-a14f-7e7ae2157ea1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.227783 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.358727 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76cc5bdc65-wzwtb" event={"ID":"a4ff9cda-d978-4d85-a14f-7e7ae2157ea1","Type":"ContainerDied","Data":"b66254d166d6319c707e0dffdc8870b438f9992483734cb1863c72ac7f46c018"} Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.361796 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76cc5bdc65-wzwtb" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.362858 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5666447f7c-7kf4h" event={"ID":"0204e2ef-b54e-40fd-a896-d366754a5b5f","Type":"ContainerDied","Data":"e8bbbadee526ba1b69fc08ba3da366e060d1152ab5cce94d510fff496bc72bc9"} Nov 25 07:04:26 crc kubenswrapper[4482]: E1125 07:04:26.365297 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:1f5c0439f2433cb462b222a5bb23e629\\\"\"" pod="openstack/cinder-db-sync-ggvxs" podUID="6f1385f6-5258-4372-a20a-30a7229ec2e8" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.366388 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5666447f7c-7kf4h" Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.403328 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6fccbbd848-gp8qx"] Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.422463 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76cc5bdc65-wzwtb"] Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.430275 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-76cc5bdc65-wzwtb"] Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.475579 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5666447f7c-7kf4h"] Nov 25 07:04:26 crc kubenswrapper[4482]: I1125 07:04:26.489220 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5666447f7c-7kf4h"] Nov 25 07:04:27 crc kubenswrapper[4482]: I1125 07:04:27.841736 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0204e2ef-b54e-40fd-a896-d366754a5b5f" path="/var/lib/kubelet/pods/0204e2ef-b54e-40fd-a896-d366754a5b5f/volumes" Nov 25 07:04:27 crc kubenswrapper[4482]: I1125 07:04:27.842893 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" path="/var/lib/kubelet/pods/a4ff9cda-d978-4d85-a14f-7e7ae2157ea1/volumes" Nov 25 07:04:29 crc kubenswrapper[4482]: I1125 07:04:29.400723 4482 generic.go:334] "Generic (PLEG): container finished" podID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerID="263e528ee7c793c546f9a438b4f1ef055b77e1781dd02fdce8655af5d75c9bb1" exitCode=0 Nov 25 07:04:29 crc kubenswrapper[4482]: I1125 07:04:29.400967 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b2c0ac8f-2b76-45a3-af85-5990913bc03a","Type":"ContainerDied","Data":"263e528ee7c793c546f9a438b4f1ef055b77e1781dd02fdce8655af5d75c9bb1"} Nov 25 07:04:31 crc kubenswrapper[4482]: I1125 07:04:31.595123 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fbb9df54d-nfljm" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Nov 25 07:04:31 crc kubenswrapper[4482]: W1125 07:04:31.791273 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bda1dfd_9f8b_4fbd_8093_689b7afada79.slice/crio-362675109e4ea32204b4fc54868afd10dc002831e54cfd158d67dab1ddd35e08 WatchSource:0}: Error finding container 362675109e4ea32204b4fc54868afd10dc002831e54cfd158d67dab1ddd35e08: Status 404 returned error can't find the container with id 362675109e4ea32204b4fc54868afd10dc002831e54cfd158d67dab1ddd35e08 Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.101270 4482 scope.go:117] "RemoveContainer" containerID="8aeeb04d8a45f0028a1578da836ad37e2c561b145a97c16a4cbc933b4edbc209" Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.323148 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-dddd66fdc-jvpm8"] Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.457863 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6fccbbd848-gp8qx" event={"ID":"5bda1dfd-9f8b-4fbd-8093-689b7afada79","Type":"ContainerStarted","Data":"362675109e4ea32204b4fc54868afd10dc002831e54cfd158d67dab1ddd35e08"} Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.573989 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8549f976cf-6szl5"] Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.582989 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f98797bb6-chb76"] Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.588979 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-55c7dc97f5-ffnl6"] Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.597535 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-94697d564-bgxtg"] Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.603344 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-759996464c-vrqp9"] Nov 25 07:04:32 crc kubenswrapper[4482]: W1125 07:04:32.748450 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8e32069_3248_4216_a894_0ea4558d88f9.slice/crio-d190adaab1cdb3d7cc705153204f45f896f6d349161027e68b037c812e52c8ba WatchSource:0}: Error finding container d190adaab1cdb3d7cc705153204f45f896f6d349161027e68b037c812e52c8ba: Status 404 returned error can't find the container with id d190adaab1cdb3d7cc705153204f45f896f6d349161027e68b037c812e52c8ba Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.888797 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6bf74b5bc8-nqmwd"] Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.893785 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.908502 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-sg-core-conf-yaml\") pod \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.908537 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-scripts\") pod \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.908560 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx7r8\" (UniqueName: \"kubernetes.io/projected/b2c0ac8f-2b76-45a3-af85-5990913bc03a-kube-api-access-gx7r8\") pod \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.908678 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-config-data\") pod \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.908831 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2c0ac8f-2b76-45a3-af85-5990913bc03a-run-httpd\") pod \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.908889 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-combined-ca-bundle\") pod \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.908929 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2c0ac8f-2b76-45a3-af85-5990913bc03a-log-httpd\") pod \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\" (UID: \"b2c0ac8f-2b76-45a3-af85-5990913bc03a\") " Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.911230 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2c0ac8f-2b76-45a3-af85-5990913bc03a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2c0ac8f-2b76-45a3-af85-5990913bc03a" (UID: "b2c0ac8f-2b76-45a3-af85-5990913bc03a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.911770 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2c0ac8f-2b76-45a3-af85-5990913bc03a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2c0ac8f-2b76-45a3-af85-5990913bc03a" (UID: "b2c0ac8f-2b76-45a3-af85-5990913bc03a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.921791 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c0ac8f-2b76-45a3-af85-5990913bc03a-kube-api-access-gx7r8" (OuterVolumeSpecName: "kube-api-access-gx7r8") pod "b2c0ac8f-2b76-45a3-af85-5990913bc03a" (UID: "b2c0ac8f-2b76-45a3-af85-5990913bc03a"). InnerVolumeSpecName "kube-api-access-gx7r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.940063 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-scripts" (OuterVolumeSpecName: "scripts") pod "b2c0ac8f-2b76-45a3-af85-5990913bc03a" (UID: "b2c0ac8f-2b76-45a3-af85-5990913bc03a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.954837 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2c0ac8f-2b76-45a3-af85-5990913bc03a" (UID: "b2c0ac8f-2b76-45a3-af85-5990913bc03a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.974598 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 07:04:32 crc kubenswrapper[4482]: I1125 07:04:32.995411 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2c0ac8f-2b76-45a3-af85-5990913bc03a" (UID: "b2c0ac8f-2b76-45a3-af85-5990913bc03a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.002707 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-config-data" (OuterVolumeSpecName: "config-data") pod "b2c0ac8f-2b76-45a3-af85-5990913bc03a" (UID: "b2c0ac8f-2b76-45a3-af85-5990913bc03a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.010389 4482 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2c0ac8f-2b76-45a3-af85-5990913bc03a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.010423 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.010438 4482 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2c0ac8f-2b76-45a3-af85-5990913bc03a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.010448 4482 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.010457 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.010465 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx7r8\" (UniqueName: \"kubernetes.io/projected/b2c0ac8f-2b76-45a3-af85-5990913bc03a-kube-api-access-gx7r8\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.010476 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c0ac8f-2b76-45a3-af85-5990913bc03a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.161695 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-b57c4d7bd-prkv2"] Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.266870 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5557bd8f45-rxxpl"] Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.486184 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-dddd66fdc-jvpm8" event={"ID":"f8e32069-3248-4216-a894-0ea4558d88f9","Type":"ContainerStarted","Data":"d190adaab1cdb3d7cc705153204f45f896f6d349161027e68b037c812e52c8ba"} Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.487565 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f98797bb6-chb76" event={"ID":"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2","Type":"ContainerStarted","Data":"61465e247380cd5be8f0901fa7f72a34e7d1faf428f3a6fe2658bf41dc8896c0"} Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.489965 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.497618 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2c0ac8f-2b76-45a3-af85-5990913bc03a","Type":"ContainerDied","Data":"8fd933553d12649d14af6df1346f4151f38368f80bf42e562bd2db1971aa80a8"} Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.497721 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.505613 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55c7dc97f5-ffnl6" event={"ID":"2db5521c-32ce-484e-a9a8-6481deedd275","Type":"ContainerStarted","Data":"031bd6b2a729cd37caa77b6c88a9a1c11b06e07c4881f7461b574fe754a97a9b"} Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.509591 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759996464c-vrqp9" event={"ID":"b0810e3e-ce88-42f5-a47d-8e101088577b","Type":"ContainerStarted","Data":"f7e466144c93212055b680387ec868cc9d40c70999a98254dc9933b2a42d2aa6"} Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.510955 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-94697d564-bgxtg" event={"ID":"a0d2c911-b73a-4216-a6c2-5642b7083f37","Type":"ContainerStarted","Data":"5d972e7a6d0db0fd603571300c1c0c8ef5394d17680b1a0920a43f590b9f3d7a"} Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.520041 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8549f976cf-6szl5" event={"ID":"5c662f2a-8694-4f15-8e15-edadbbdaa093","Type":"ContainerStarted","Data":"688f48748edfb644ba06c27632e25127f57ceec85bb3461c2856a6489f3930b0"} Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.623537 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.644570 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.652305 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:04:33 crc kubenswrapper[4482]: E1125 07:04:33.652779 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerName="sg-core" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.652885 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerName="sg-core" Nov 25 07:04:33 crc kubenswrapper[4482]: E1125 07:04:33.652898 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" containerName="horizon-log" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.652993 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" containerName="horizon-log" Nov 25 07:04:33 crc kubenswrapper[4482]: E1125 07:04:33.653025 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerName="ceilometer-notification-agent" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653032 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerName="ceilometer-notification-agent" Nov 25 07:04:33 crc kubenswrapper[4482]: E1125 07:04:33.653042 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="961bd3cf-55d9-48b0-8f63-a8c2c2942c41" containerName="horizon-log" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653049 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="961bd3cf-55d9-48b0-8f63-a8c2c2942c41" containerName="horizon-log" Nov 25 07:04:33 crc kubenswrapper[4482]: E1125 07:04:33.653063 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" containerName="horizon" Nov 25 07:04:33 crc 
kubenswrapper[4482]: I1125 07:04:33.653070 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" containerName="horizon" Nov 25 07:04:33 crc kubenswrapper[4482]: E1125 07:04:33.653089 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0204e2ef-b54e-40fd-a896-d366754a5b5f" containerName="horizon-log" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653094 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0204e2ef-b54e-40fd-a896-d366754a5b5f" containerName="horizon-log" Nov 25 07:04:33 crc kubenswrapper[4482]: E1125 07:04:33.653106 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0204e2ef-b54e-40fd-a896-d366754a5b5f" containerName="horizon" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653112 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0204e2ef-b54e-40fd-a896-d366754a5b5f" containerName="horizon" Nov 25 07:04:33 crc kubenswrapper[4482]: E1125 07:04:33.653121 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="961bd3cf-55d9-48b0-8f63-a8c2c2942c41" containerName="horizon" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653126 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="961bd3cf-55d9-48b0-8f63-a8c2c2942c41" containerName="horizon" Nov 25 07:04:33 crc kubenswrapper[4482]: E1125 07:04:33.653135 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerName="ceilometer-central-agent" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653140 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerName="ceilometer-central-agent" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653374 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="961bd3cf-55d9-48b0-8f63-a8c2c2942c41" containerName="horizon" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653386 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="0204e2ef-b54e-40fd-a896-d366754a5b5f" containerName="horizon-log" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653396 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" containerName="horizon-log" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653403 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="0204e2ef-b54e-40fd-a896-d366754a5b5f" containerName="horizon" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653417 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerName="sg-core" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653426 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerName="ceilometer-central-agent" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653441 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ff9cda-d978-4d85-a14f-7e7ae2157ea1" containerName="horizon" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653451 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="961bd3cf-55d9-48b0-8f63-a8c2c2942c41" containerName="horizon-log" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.653463 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" containerName="ceilometer-notification-agent" Nov 25 07:04:33 
crc kubenswrapper[4482]: I1125 07:04:33.655929 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.658489 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.659939 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.660605 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.742576 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-config-data\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.742673 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdz99\" (UniqueName: \"kubernetes.io/projected/923dd3f7-190f-4715-a057-3eb83c260918-kube-api-access-jdz99\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.742730 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.742889 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923dd3f7-190f-4715-a057-3eb83c260918-log-httpd\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.742955 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923dd3f7-190f-4715-a057-3eb83c260918-run-httpd\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.743010 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-scripts\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.743024 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.844559 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c0ac8f-2b76-45a3-af85-5990913bc03a" path="/var/lib/kubelet/pods/b2c0ac8f-2b76-45a3-af85-5990913bc03a/volumes" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.846572 
4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdz99\" (UniqueName: \"kubernetes.io/projected/923dd3f7-190f-4715-a057-3eb83c260918-kube-api-access-jdz99\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.846629 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.846692 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923dd3f7-190f-4715-a057-3eb83c260918-log-httpd\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.846730 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923dd3f7-190f-4715-a057-3eb83c260918-run-httpd\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.846870 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-scripts\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.846900 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.846922 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-config-data\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.848459 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923dd3f7-190f-4715-a057-3eb83c260918-run-httpd\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.848523 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923dd3f7-190f-4715-a057-3eb83c260918-log-httpd\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.853556 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-scripts\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.854292 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-config-data\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.855748 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.855911 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.865388 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdz99\" (UniqueName: \"kubernetes.io/projected/923dd3f7-190f-4715-a057-3eb83c260918-kube-api-access-jdz99\") pod \"ceilometer-0\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " pod="openstack/ceilometer-0" Nov 25 07:04:33 crc kubenswrapper[4482]: I1125 07:04:33.976292 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:04:36 crc kubenswrapper[4482]: I1125 07:04:36.779124 4482 scope.go:117] "RemoveContainer" containerID="1b58a6c9c63d02d1c03df8a3e99942660dc44a9e6fee08f5d33872a90f509b15" Nov 25 07:04:37 crc kubenswrapper[4482]: W1125 07:04:37.046704 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33132915_ebcf_4d71_83af_26542eb68ac6.slice/crio-a32923af3c9c24899fc3cb7d0282c8ab835784dc25db86f5a5dddb326ea613c2 WatchSource:0}: Error finding container a32923af3c9c24899fc3cb7d0282c8ab835784dc25db86f5a5dddb326ea613c2: Status 404 returned error can't find the container with id a32923af3c9c24899fc3cb7d0282c8ab835784dc25db86f5a5dddb326ea613c2 Nov 25 07:04:37 crc kubenswrapper[4482]: W1125 07:04:37.054241 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9112227_4108_4545_b5ae_d9e3a5d79faa.slice/crio-bf139662224a9ddec6267f73b289879adc9d85c3f2c22b0f1ca82ac86f8f8201 WatchSource:0}: Error finding container bf139662224a9ddec6267f73b289879adc9d85c3f2c22b0f1ca82ac86f8f8201: Status 404 returned error can't find the container with id bf139662224a9ddec6267f73b289879adc9d85c3f2c22b0f1ca82ac86f8f8201 Nov 25 07:04:37 crc kubenswrapper[4482]: I1125 07:04:37.254584 4482 scope.go:117] "RemoveContainer" containerID="e2654ff2424d40b9a2887f182e4139b04d1150ed18bc38868daa6caac58a4b4d" Nov 25 07:04:37 crc kubenswrapper[4482]: I1125 07:04:37.326994 4482 scope.go:117] "RemoveContainer" containerID="bc8e586857a5aa46d535df56f5ad048383cb1a5f158552d4efc1df3f74d3c7f6" Nov 25 07:04:37 crc kubenswrapper[4482]: I1125 07:04:37.567424 4482 scope.go:117] "RemoveContainer" containerID="9e1af7d92fe34ad17e33f0c96dc29aa6a0740ebd19190c7322558bd36252afa8" Nov 25 07:04:37 crc kubenswrapper[4482]: I1125 07:04:37.571467 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" 
event={"ID":"33132915-ebcf-4d71-83af-26542eb68ac6","Type":"ContainerStarted","Data":"a32923af3c9c24899fc3cb7d0282c8ab835784dc25db86f5a5dddb326ea613c2"} Nov 25 07:04:37 crc kubenswrapper[4482]: I1125 07:04:37.578394 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6","Type":"ContainerStarted","Data":"585b321496fa83b5ddffbcfceea8ff7f168693a4c04fae67bd76f3ecc129d84b"} Nov 25 07:04:37 crc kubenswrapper[4482]: I1125 07:04:37.580272 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" event={"ID":"f9112227-4108-4545-b5ae-d9e3a5d79faa","Type":"ContainerStarted","Data":"bf139662224a9ddec6267f73b289879adc9d85c3f2c22b0f1ca82ac86f8f8201"} Nov 25 07:04:37 crc kubenswrapper[4482]: I1125 07:04:37.581915 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6bf74b5bc8-nqmwd" event={"ID":"fc2d466d-9429-472d-b1a4-cccf7da7f5fc","Type":"ContainerStarted","Data":"b0c44faceaf7ad098be394ae25ab68db63939fc3b7c94f4b762a5f92b7c8dbf8"} Nov 25 07:04:37 crc kubenswrapper[4482]: I1125 07:04:37.584435 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"278924b6-38eb-418e-87b6-be1872ee5464","Type":"ContainerStarted","Data":"500c0aae1a7ff13de680ecd1ac68a4f5e8e8a5d0348ec1bbec5e2db206d1b578"} Nov 25 07:04:37 crc kubenswrapper[4482]: I1125 07:04:37.634595 4482 scope.go:117] "RemoveContainer" containerID="ce606b7d230b3f87476793909ca2a8a5c173cab166c165a1e8d4d5669eabb34e" Nov 25 07:04:37 crc kubenswrapper[4482]: I1125 07:04:37.693517 4482 scope.go:117] "RemoveContainer" containerID="263e528ee7c793c546f9a438b4f1ef055b77e1781dd02fdce8655af5d75c9bb1" Nov 25 07:04:37 crc kubenswrapper[4482]: I1125 07:04:37.711901 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:04:37 crc kubenswrapper[4482]: W1125 07:04:37.738623 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod923dd3f7_190f_4715_a057_3eb83c260918.slice/crio-2c8609885d8ab2e22021093b9ff4211ccc65a987c21a386aa90e7ceec5a2a268 WatchSource:0}: Error finding container 2c8609885d8ab2e22021093b9ff4211ccc65a987c21a386aa90e7ceec5a2a268: Status 404 returned error can't find the container with id 2c8609885d8ab2e22021093b9ff4211ccc65a987c21a386aa90e7ceec5a2a268 Nov 25 07:04:37 crc kubenswrapper[4482]: I1125 07:04:37.822036 4482 scope.go:117] "RemoveContainer" containerID="40be42855bfac49bec1255396dd5e074aaecd8d028edf160e46ceab36f50c2dd" Nov 25 07:04:38 crc kubenswrapper[4482]: I1125 07:04:38.595395 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923dd3f7-190f-4715-a057-3eb83c260918","Type":"ContainerStarted","Data":"2c8609885d8ab2e22021093b9ff4211ccc65a987c21a386aa90e7ceec5a2a268"} Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.614623 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6","Type":"ContainerStarted","Data":"e15d5775f5e089684b8c86af1ec76b4df36e9bec87b65ed9c5da196a74ecf658"} Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.616701 4482 generic.go:334] "Generic (PLEG): container finished" podID="f9112227-4108-4545-b5ae-d9e3a5d79faa" containerID="56ec6c37c289987e62077f7231b6789860e37b8e710778c34d4471b6f052fc24" exitCode=0 Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 
07:04:39.616750 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" event={"ID":"f9112227-4108-4545-b5ae-d9e3a5d79faa","Type":"ContainerDied","Data":"56ec6c37c289987e62077f7231b6789860e37b8e710778c34d4471b6f052fc24"} Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.622791 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ac4e9f57-0830-4b4e-9544-6f38309646f7","Type":"ContainerStarted","Data":"5385fcb3a9a25f01392a3bc9fa665caadb0ae311b08c7af6a9d4f2938c4291ad"} Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.625535 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"278924b6-38eb-418e-87b6-be1872ee5464","Type":"ContainerStarted","Data":"8f1d3b24346c2505d2a113b8bea1d426e5f49f0f96b408e99e4b3327cef2059d"} Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.628056 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cfr4t" event={"ID":"cda0ef98-7b63-4531-8655-a537323394a7","Type":"ContainerStarted","Data":"92238d549d3d1f1b3a9886cd6ca519323cb68b4f3696e31974afa748a8ab2ab7"} Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.633601 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6fccbbd848-gp8qx" event={"ID":"5bda1dfd-9f8b-4fbd-8093-689b7afada79","Type":"ContainerStarted","Data":"08d1da05c3910796afa7506712e18f571090b9d1e1d10ddfdc0f55109287b8c3"} Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.637780 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.646325 4482 generic.go:334] "Generic (PLEG): container finished" podID="b0810e3e-ce88-42f5-a47d-8e101088577b" containerID="27491c1d259302264c678c7be4fb50945c1d83edfbc821b65828de75f2990925" exitCode=0 Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.646732 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759996464c-vrqp9" event={"ID":"b0810e3e-ce88-42f5-a47d-8e101088577b","Type":"ContainerDied","Data":"27491c1d259302264c678c7be4fb50945c1d83edfbc821b65828de75f2990925"} Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.649649 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-94697d564-bgxtg" event={"ID":"a0d2c911-b73a-4216-a6c2-5642b7083f37","Type":"ContainerStarted","Data":"6f0eb32e27b00b1bb4b4e56db82f2d5f9db4cb95ed1e39698c381d9db04f2e85"} Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.650668 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-94697d564-bgxtg" Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.658453 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6fccbbd848-gp8qx" podStartSLOduration=35.658441479 podStartE2EDuration="35.658441479s" podCreationTimestamp="2025-11-25 07:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:04:39.654011066 +0000 UTC m=+1054.142242325" watchObservedRunningTime="2025-11-25 07:04:39.658441479 +0000 UTC m=+1054.146672737" Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.725295 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-cfr4t" podStartSLOduration=8.711287646 
podStartE2EDuration="52.72527529s" podCreationTimestamp="2025-11-25 07:03:47 +0000 UTC" firstStartedPulling="2025-11-25 07:03:53.335493883 +0000 UTC m=+1007.823725142" lastFinishedPulling="2025-11-25 07:04:37.349481527 +0000 UTC m=+1051.837712786" observedRunningTime="2025-11-25 07:04:39.701126055 +0000 UTC m=+1054.189357314" watchObservedRunningTime="2025-11-25 07:04:39.72527529 +0000 UTC m=+1054.213506549" Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.739050 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=8.444949196 podStartE2EDuration="1m9.739032505s" podCreationTimestamp="2025-11-25 07:03:30 +0000 UTC" firstStartedPulling="2025-11-25 07:03:35.770129862 +0000 UTC m=+990.258361122" lastFinishedPulling="2025-11-25 07:04:37.064213172 +0000 UTC m=+1051.552444431" observedRunningTime="2025-11-25 07:04:39.710830139 +0000 UTC m=+1054.199061398" watchObservedRunningTime="2025-11-25 07:04:39.739032505 +0000 UTC m=+1054.227263764" Nov 25 07:04:39 crc kubenswrapper[4482]: I1125 07:04:39.787874 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-94697d564-bgxtg" podStartSLOduration=28.787831358 podStartE2EDuration="28.787831358s" podCreationTimestamp="2025-11-25 07:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:04:39.762782808 +0000 UTC m=+1054.251014067" watchObservedRunningTime="2025-11-25 07:04:39.787831358 +0000 UTC m=+1054.276062617" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.255751 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.300991 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4w6s\" (UniqueName: \"kubernetes.io/projected/b0810e3e-ce88-42f5-a47d-8e101088577b-kube-api-access-l4w6s\") pod \"b0810e3e-ce88-42f5-a47d-8e101088577b\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.301030 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-dns-svc\") pod \"b0810e3e-ce88-42f5-a47d-8e101088577b\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.301312 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-config\") pod \"b0810e3e-ce88-42f5-a47d-8e101088577b\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.301395 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-ovsdbserver-sb\") pod \"b0810e3e-ce88-42f5-a47d-8e101088577b\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.301466 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-dns-swift-storage-0\") pod \"b0810e3e-ce88-42f5-a47d-8e101088577b\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " Nov 
25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.301509 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-ovsdbserver-nb\") pod \"b0810e3e-ce88-42f5-a47d-8e101088577b\" (UID: \"b0810e3e-ce88-42f5-a47d-8e101088577b\") " Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.319315 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0810e3e-ce88-42f5-a47d-8e101088577b-kube-api-access-l4w6s" (OuterVolumeSpecName: "kube-api-access-l4w6s") pod "b0810e3e-ce88-42f5-a47d-8e101088577b" (UID: "b0810e3e-ce88-42f5-a47d-8e101088577b"). InnerVolumeSpecName "kube-api-access-l4w6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.326004 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-config" (OuterVolumeSpecName: "config") pod "b0810e3e-ce88-42f5-a47d-8e101088577b" (UID: "b0810e3e-ce88-42f5-a47d-8e101088577b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.329970 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b0810e3e-ce88-42f5-a47d-8e101088577b" (UID: "b0810e3e-ce88-42f5-a47d-8e101088577b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.331536 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b0810e3e-ce88-42f5-a47d-8e101088577b" (UID: "b0810e3e-ce88-42f5-a47d-8e101088577b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.333847 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b0810e3e-ce88-42f5-a47d-8e101088577b" (UID: "b0810e3e-ce88-42f5-a47d-8e101088577b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.347819 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b0810e3e-ce88-42f5-a47d-8e101088577b" (UID: "b0810e3e-ce88-42f5-a47d-8e101088577b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.404537 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.404584 4482 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.404596 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.404605 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4w6s\" (UniqueName: \"kubernetes.io/projected/b0810e3e-ce88-42f5-a47d-8e101088577b-kube-api-access-l4w6s\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.404615 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.404643 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0810e3e-ce88-42f5-a47d-8e101088577b-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.594683 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fbb9df54d-nfljm" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.735955 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-759996464c-vrqp9" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.736297 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-759996464c-vrqp9" event={"ID":"b0810e3e-ce88-42f5-a47d-8e101088577b","Type":"ContainerDied","Data":"f7e466144c93212055b680387ec868cc9d40c70999a98254dc9933b2a42d2aa6"} Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.736368 4482 scope.go:117] "RemoveContainer" containerID="27491c1d259302264c678c7be4fb50945c1d83edfbc821b65828de75f2990925" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.743685 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.757623 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6bf74b5bc8-nqmwd" event={"ID":"fc2d466d-9429-472d-b1a4-cccf7da7f5fc","Type":"ContainerStarted","Data":"935eccb7497ddade043979cbe04e124f86705735783e2ec575f5449afd26375c"} Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.757777 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.768481 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" podStartSLOduration=31.768467889 podStartE2EDuration="31.768467889s" podCreationTimestamp="2025-11-25 07:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:04:41.759105929 +0000 UTC m=+1056.247337189" watchObservedRunningTime="2025-11-25 07:04:41.768467889 +0000 UTC m=+1056.256699147" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.789207 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6bf74b5bc8-nqmwd" podStartSLOduration=26.328872005 podStartE2EDuration="30.789194411s" podCreationTimestamp="2025-11-25 07:04:11 +0000 UTC" firstStartedPulling="2025-11-25 07:04:36.799614772 +0000 UTC m=+1051.287846031" lastFinishedPulling="2025-11-25 07:04:41.259937178 +0000 UTC m=+1055.748168437" observedRunningTime="2025-11-25 07:04:41.787832664 +0000 UTC m=+1056.276063913" watchObservedRunningTime="2025-11-25 07:04:41.789194411 +0000 UTC m=+1056.277425670" Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.871235 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-759996464c-vrqp9"] Nov 25 07:04:41 crc kubenswrapper[4482]: I1125 07:04:41.896803 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-759996464c-vrqp9"] Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.765461 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-dddd66fdc-jvpm8" event={"ID":"f8e32069-3248-4216-a894-0ea4558d88f9","Type":"ContainerStarted","Data":"d1d40b20f9f1d226a9640aa607b72f3e08019f3aae87ed03690a378bf7abf260"} Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.765815 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-dddd66fdc-jvpm8" podUID="f8e32069-3248-4216-a894-0ea4558d88f9" containerName="heat-api" containerID="cri-o://d1d40b20f9f1d226a9640aa607b72f3e08019f3aae87ed03690a378bf7abf260" gracePeriod=60 Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.765888 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.774203 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ggvxs" event={"ID":"6f1385f6-5258-4372-a20a-30a7229ec2e8","Type":"ContainerStarted","Data":"01c7d5ff0000392ead9d789749415cd0ef192c17b400db44e5603e6a3540cb56"} Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.782280 4482 generic.go:334] "Generic (PLEG): container finished" podID="5c662f2a-8694-4f15-8e15-edadbbdaa093" containerID="3911274db5f50738175ce17813b81be08261a6bb8e9ca1314055a4659636b0ad" exitCode=1 Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.782356 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8549f976cf-6szl5" event={"ID":"5c662f2a-8694-4f15-8e15-edadbbdaa093","Type":"ContainerDied","Data":"3911274db5f50738175ce17813b81be08261a6bb8e9ca1314055a4659636b0ad"} Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.782616 4482 scope.go:117] "RemoveContainer" containerID="3911274db5f50738175ce17813b81be08261a6bb8e9ca1314055a4659636b0ad" Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.784270 4482 generic.go:334] "Generic (PLEG): container finished" podID="fc2d466d-9429-472d-b1a4-cccf7da7f5fc" containerID="935eccb7497ddade043979cbe04e124f86705735783e2ec575f5449afd26375c" exitCode=1 Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.784314 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6bf74b5bc8-nqmwd" event={"ID":"fc2d466d-9429-472d-b1a4-cccf7da7f5fc","Type":"ContainerDied","Data":"935eccb7497ddade043979cbe04e124f86705735783e2ec575f5449afd26375c"} Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.784594 4482 scope.go:117] "RemoveContainer" containerID="935eccb7497ddade043979cbe04e124f86705735783e2ec575f5449afd26375c" Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.794453 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-dddd66fdc-jvpm8" podStartSLOduration=30.338723727 podStartE2EDuration="38.794442681s" podCreationTimestamp="2025-11-25 07:04:04 +0000 UTC" firstStartedPulling="2025-11-25 07:04:32.803444967 +0000 UTC m=+1047.291676226" lastFinishedPulling="2025-11-25 07:04:41.25916392 +0000 UTC m=+1055.747395180" observedRunningTime="2025-11-25 07:04:42.788916814 +0000 UTC m=+1057.277148073" watchObservedRunningTime="2025-11-25 07:04:42.794442681 +0000 UTC m=+1057.282673931" Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.801622 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"278924b6-38eb-418e-87b6-be1872ee5464","Type":"ContainerStarted","Data":"a66e7b0a63ecfc39ec70f96736974092e34ead88fd88cc4d1c0aed390011b9c0"} Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.801736 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="278924b6-38eb-418e-87b6-be1872ee5464" containerName="glance-log" containerID="cri-o://8f1d3b24346c2505d2a113b8bea1d426e5f49f0f96b408e99e4b3327cef2059d" gracePeriod=30 Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.801830 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="278924b6-38eb-418e-87b6-be1872ee5464" containerName="glance-httpd" containerID="cri-o://a66e7b0a63ecfc39ec70f96736974092e34ead88fd88cc4d1c0aed390011b9c0" gracePeriod=30 Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 
07:04:42.834359 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923dd3f7-190f-4715-a057-3eb83c260918","Type":"ContainerStarted","Data":"cc47653245d4c8b1f9dab090cfd50b473a9a2fbfab4c880d9f8c960e5b7e5530"} Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.845435 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55c7dc97f5-ffnl6" event={"ID":"2db5521c-32ce-484e-a9a8-6481deedd275","Type":"ContainerStarted","Data":"7e42bfeebeb1d004339d86e158c64017eaee048bed0ffe7e599f939fa4a9830d"} Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.845869 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-55c7dc97f5-ffnl6" Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.862742 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6","Type":"ContainerStarted","Data":"24f436aaa981ba58e94969feb77f32dde819f57a72783e513c714c4cacc44e1a"} Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.862837 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" containerName="glance-httpd" containerID="cri-o://24f436aaa981ba58e94969feb77f32dde819f57a72783e513c714c4cacc44e1a" gracePeriod=30 Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.862808 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" containerName="glance-log" containerID="cri-o://e15d5775f5e089684b8c86af1ec76b4df36e9bec87b65ed9c5da196a74ecf658" gracePeriod=30 Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.874784 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" event={"ID":"f9112227-4108-4545-b5ae-d9e3a5d79faa","Type":"ContainerStarted","Data":"b975e2827d4e4e4721beba49b9653b5225fb454df16327c9f65e2d2922e595d3"} Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.883372 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" event={"ID":"33132915-ebcf-4d71-83af-26542eb68ac6","Type":"ContainerStarted","Data":"8da9d98f9154a3b5092cc70d78f99cd4851493b30b05d6fba9d006560d32e159"} Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.883540 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.885711 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f98797bb6-chb76" event={"ID":"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2","Type":"ContainerStarted","Data":"0e9424d4a7c61488cb893f9525602fe04c35fc4abb72b3457b70a61c7bf4e7ad"} Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.885898 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6f98797bb6-chb76" podUID="59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2" containerName="heat-cfnapi" containerID="cri-o://0e9424d4a7c61488cb893f9525602fe04c35fc4abb72b3457b70a61c7bf4e7ad" gracePeriod=60 Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.886156 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.901515 4482 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/cinder-db-sync-ggvxs" podStartSLOduration=3.463109396 podStartE2EDuration="2m1.901499761s" podCreationTimestamp="2025-11-25 07:02:41 +0000 UTC" firstStartedPulling="2025-11-25 07:02:42.821621775 +0000 UTC m=+937.309853034" lastFinishedPulling="2025-11-25 07:04:41.26001214 +0000 UTC m=+1055.748243399" observedRunningTime="2025-11-25 07:04:42.878288533 +0000 UTC m=+1057.366519793" watchObservedRunningTime="2025-11-25 07:04:42.901499761 +0000 UTC m=+1057.389731020" Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.903154 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-55c7dc97f5-ffnl6" podStartSLOduration=21.447706316 podStartE2EDuration="29.90314921s" podCreationTimestamp="2025-11-25 07:04:13 +0000 UTC" firstStartedPulling="2025-11-25 07:04:32.802516587 +0000 UTC m=+1047.290747845" lastFinishedPulling="2025-11-25 07:04:41.25795948 +0000 UTC m=+1055.746190739" observedRunningTime="2025-11-25 07:04:42.897985526 +0000 UTC m=+1057.386216785" watchObservedRunningTime="2025-11-25 07:04:42.90314921 +0000 UTC m=+1057.391380469" Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.936582 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=32.936565535 podStartE2EDuration="32.936565535s" podCreationTimestamp="2025-11-25 07:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:04:42.919869861 +0000 UTC m=+1057.408101120" watchObservedRunningTime="2025-11-25 07:04:42.936565535 +0000 UTC m=+1057.424796784" Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.972009 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=32.971981079 podStartE2EDuration="32.971981079s" podCreationTimestamp="2025-11-25 07:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:04:42.948976992 +0000 UTC m=+1057.437208251" watchObservedRunningTime="2025-11-25 07:04:42.971981079 +0000 UTC m=+1057.460212338" Nov 25 07:04:42 crc kubenswrapper[4482]: I1125 07:04:42.993952 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6f98797bb6-chb76" podStartSLOduration=30.452240435 podStartE2EDuration="38.993929365s" podCreationTimestamp="2025-11-25 07:04:04 +0000 UTC" firstStartedPulling="2025-11-25 07:04:32.802565609 +0000 UTC m=+1047.290796858" lastFinishedPulling="2025-11-25 07:04:41.344254529 +0000 UTC m=+1055.832485788" observedRunningTime="2025-11-25 07:04:42.980562346 +0000 UTC m=+1057.468793605" watchObservedRunningTime="2025-11-25 07:04:42.993929365 +0000 UTC m=+1057.482160624" Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.020245 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-b57c4d7bd-prkv2" podStartSLOduration=25.728856405 podStartE2EDuration="30.020224125s" podCreationTimestamp="2025-11-25 07:04:13 +0000 UTC" firstStartedPulling="2025-11-25 07:04:37.05098254 +0000 UTC m=+1051.539213799" lastFinishedPulling="2025-11-25 07:04:41.34235026 +0000 UTC m=+1055.830581519" observedRunningTime="2025-11-25 07:04:43.006054402 +0000 UTC m=+1057.494285651" watchObservedRunningTime="2025-11-25 07:04:43.020224125 +0000 UTC m=+1057.508455383" Nov 25 07:04:43 crc 
kubenswrapper[4482]: I1125 07:04:43.843038 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0810e3e-ce88-42f5-a47d-8e101088577b" path="/var/lib/kubelet/pods/b0810e3e-ce88-42f5-a47d-8e101088577b/volumes" Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.898721 4482 generic.go:334] "Generic (PLEG): container finished" podID="fc2d466d-9429-472d-b1a4-cccf7da7f5fc" containerID="13b61690e842970ca6ad1e39bc48fb05fc884f74fb7e5a3fa6384fd47cdc4ba3" exitCode=1 Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.898791 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6bf74b5bc8-nqmwd" event={"ID":"fc2d466d-9429-472d-b1a4-cccf7da7f5fc","Type":"ContainerDied","Data":"13b61690e842970ca6ad1e39bc48fb05fc884f74fb7e5a3fa6384fd47cdc4ba3"} Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.898868 4482 scope.go:117] "RemoveContainer" containerID="935eccb7497ddade043979cbe04e124f86705735783e2ec575f5449afd26375c" Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.899380 4482 scope.go:117] "RemoveContainer" containerID="13b61690e842970ca6ad1e39bc48fb05fc884f74fb7e5a3fa6384fd47cdc4ba3" Nov 25 07:04:43 crc kubenswrapper[4482]: E1125 07:04:43.899589 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6bf74b5bc8-nqmwd_openstack(fc2d466d-9429-472d-b1a4-cccf7da7f5fc)\"" pod="openstack/heat-api-6bf74b5bc8-nqmwd" podUID="fc2d466d-9429-472d-b1a4-cccf7da7f5fc" Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.904021 4482 generic.go:334] "Generic (PLEG): container finished" podID="278924b6-38eb-418e-87b6-be1872ee5464" containerID="8f1d3b24346c2505d2a113b8bea1d426e5f49f0f96b408e99e4b3327cef2059d" exitCode=143 Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.904107 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"278924b6-38eb-418e-87b6-be1872ee5464","Type":"ContainerDied","Data":"8f1d3b24346c2505d2a113b8bea1d426e5f49f0f96b408e99e4b3327cef2059d"} Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.906341 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923dd3f7-190f-4715-a057-3eb83c260918","Type":"ContainerStarted","Data":"8ba44be81aca99bb30c5ed8b31eb8609112c090f0d1d0fe91c2b6c395d0ee672"} Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.906384 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923dd3f7-190f-4715-a057-3eb83c260918","Type":"ContainerStarted","Data":"862b576c7d68825f91daaa8384fd3fd1f4032f205a1608bcd6f78f293b8d4c23"} Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.908346 4482 generic.go:334] "Generic (PLEG): container finished" podID="abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" containerID="e15d5775f5e089684b8c86af1ec76b4df36e9bec87b65ed9c5da196a74ecf658" exitCode=143 Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.908396 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6","Type":"ContainerDied","Data":"e15d5775f5e089684b8c86af1ec76b4df36e9bec87b65ed9c5da196a74ecf658"} Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.911430 4482 generic.go:334] "Generic (PLEG): container finished" podID="5c662f2a-8694-4f15-8e15-edadbbdaa093" containerID="3ddc44c8f4e7d1ddead2b846947f816c3c7b220ffbb1e68a889ee516741bddbd" 
exitCode=1 Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.912294 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8549f976cf-6szl5" event={"ID":"5c662f2a-8694-4f15-8e15-edadbbdaa093","Type":"ContainerDied","Data":"3ddc44c8f4e7d1ddead2b846947f816c3c7b220ffbb1e68a889ee516741bddbd"} Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.912678 4482 scope.go:117] "RemoveContainer" containerID="3ddc44c8f4e7d1ddead2b846947f816c3c7b220ffbb1e68a889ee516741bddbd" Nov 25 07:04:43 crc kubenswrapper[4482]: E1125 07:04:43.912893 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8549f976cf-6szl5_openstack(5c662f2a-8694-4f15-8e15-edadbbdaa093)\"" pod="openstack/heat-cfnapi-8549f976cf-6szl5" podUID="5c662f2a-8694-4f15-8e15-edadbbdaa093" Nov 25 07:04:43 crc kubenswrapper[4482]: I1125 07:04:43.964330 4482 scope.go:117] "RemoveContainer" containerID="3911274db5f50738175ce17813b81be08261a6bb8e9ca1314055a4659636b0ad" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.685538 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.838527 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.848776 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-config-data-custom\") pod \"f8e32069-3248-4216-a894-0ea4558d88f9\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.848963 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbcjt\" (UniqueName: \"kubernetes.io/projected/f8e32069-3248-4216-a894-0ea4558d88f9-kube-api-access-jbcjt\") pod \"f8e32069-3248-4216-a894-0ea4558d88f9\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.849797 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-config-data\") pod \"f8e32069-3248-4216-a894-0ea4558d88f9\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.849895 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-combined-ca-bundle\") pod \"f8e32069-3248-4216-a894-0ea4558d88f9\" (UID: \"f8e32069-3248-4216-a894-0ea4558d88f9\") " Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.893220 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e32069-3248-4216-a894-0ea4558d88f9-kube-api-access-jbcjt" (OuterVolumeSpecName: "kube-api-access-jbcjt") pod "f8e32069-3248-4216-a894-0ea4558d88f9" (UID: "f8e32069-3248-4216-a894-0ea4558d88f9"). InnerVolumeSpecName "kube-api-access-jbcjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.894155 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f8e32069-3248-4216-a894-0ea4558d88f9" (UID: "f8e32069-3248-4216-a894-0ea4558d88f9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.906969 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8e32069-3248-4216-a894-0ea4558d88f9" (UID: "f8e32069-3248-4216-a894-0ea4558d88f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.937547 4482 generic.go:334] "Generic (PLEG): container finished" podID="278924b6-38eb-418e-87b6-be1872ee5464" containerID="a66e7b0a63ecfc39ec70f96736974092e34ead88fd88cc4d1c0aed390011b9c0" exitCode=0 Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.937619 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"278924b6-38eb-418e-87b6-be1872ee5464","Type":"ContainerDied","Data":"a66e7b0a63ecfc39ec70f96736974092e34ead88fd88cc4d1c0aed390011b9c0"} Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.937664 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"278924b6-38eb-418e-87b6-be1872ee5464","Type":"ContainerDied","Data":"500c0aae1a7ff13de680ecd1ac68a4f5e8e8a5d0348ec1bbec5e2db206d1b578"} Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.937598 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.937715 4482 scope.go:117] "RemoveContainer" containerID="a66e7b0a63ecfc39ec70f96736974092e34ead88fd88cc4d1c0aed390011b9c0" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.946378 4482 generic.go:334] "Generic (PLEG): container finished" podID="f8e32069-3248-4216-a894-0ea4558d88f9" containerID="d1d40b20f9f1d226a9640aa607b72f3e08019f3aae87ed03690a378bf7abf260" exitCode=0 Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.946462 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-dddd66fdc-jvpm8" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.946543 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-dddd66fdc-jvpm8" event={"ID":"f8e32069-3248-4216-a894-0ea4558d88f9","Type":"ContainerDied","Data":"d1d40b20f9f1d226a9640aa607b72f3e08019f3aae87ed03690a378bf7abf260"} Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.946619 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-dddd66fdc-jvpm8" event={"ID":"f8e32069-3248-4216-a894-0ea4558d88f9","Type":"ContainerDied","Data":"d190adaab1cdb3d7cc705153204f45f896f6d349161027e68b037c812e52c8ba"} Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.951156 4482 generic.go:334] "Generic (PLEG): container finished" podID="abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" containerID="24f436aaa981ba58e94969feb77f32dde819f57a72783e513c714c4cacc44e1a" exitCode=0 Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.951246 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6","Type":"ContainerDied","Data":"24f436aaa981ba58e94969feb77f32dde819f57a72783e513c714c4cacc44e1a"} Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.952575 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-config-data\") pod \"278924b6-38eb-418e-87b6-be1872ee5464\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.952748 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/278924b6-38eb-418e-87b6-be1872ee5464-httpd-run\") pod \"278924b6-38eb-418e-87b6-be1872ee5464\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.952820 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/278924b6-38eb-418e-87b6-be1872ee5464-logs\") pod \"278924b6-38eb-418e-87b6-be1872ee5464\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.952877 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-combined-ca-bundle\") pod \"278924b6-38eb-418e-87b6-be1872ee5464\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.952933 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"278924b6-38eb-418e-87b6-be1872ee5464\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.952977 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntf4x\" (UniqueName: \"kubernetes.io/projected/278924b6-38eb-418e-87b6-be1872ee5464-kube-api-access-ntf4x\") pod \"278924b6-38eb-418e-87b6-be1872ee5464\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.953022 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-scripts\") pod \"278924b6-38eb-418e-87b6-be1872ee5464\" (UID: \"278924b6-38eb-418e-87b6-be1872ee5464\") " Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.953405 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/278924b6-38eb-418e-87b6-be1872ee5464-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "278924b6-38eb-418e-87b6-be1872ee5464" (UID: "278924b6-38eb-418e-87b6-be1872ee5464"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.953661 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/278924b6-38eb-418e-87b6-be1872ee5464-logs" (OuterVolumeSpecName: "logs") pod "278924b6-38eb-418e-87b6-be1872ee5464" (UID: "278924b6-38eb-418e-87b6-be1872ee5464"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.954145 4482 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/278924b6-38eb-418e-87b6-be1872ee5464-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.954237 4482 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.954293 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/278924b6-38eb-418e-87b6-be1872ee5464-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.954347 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbcjt\" (UniqueName: \"kubernetes.io/projected/f8e32069-3248-4216-a894-0ea4558d88f9-kube-api-access-jbcjt\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.954393 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.955527 4482 scope.go:117] "RemoveContainer" containerID="3ddc44c8f4e7d1ddead2b846947f816c3c7b220ffbb1e68a889ee516741bddbd" Nov 25 07:04:44 crc kubenswrapper[4482]: E1125 07:04:44.955799 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8549f976cf-6szl5_openstack(5c662f2a-8694-4f15-8e15-edadbbdaa093)\"" pod="openstack/heat-cfnapi-8549f976cf-6szl5" podUID="5c662f2a-8694-4f15-8e15-edadbbdaa093" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.962657 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/278924b6-38eb-418e-87b6-be1872ee5464-kube-api-access-ntf4x" (OuterVolumeSpecName: "kube-api-access-ntf4x") pod "278924b6-38eb-418e-87b6-be1872ee5464" (UID: "278924b6-38eb-418e-87b6-be1872ee5464"). InnerVolumeSpecName "kube-api-access-ntf4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.964682 4482 scope.go:117] "RemoveContainer" containerID="13b61690e842970ca6ad1e39bc48fb05fc884f74fb7e5a3fa6384fd47cdc4ba3" Nov 25 07:04:44 crc kubenswrapper[4482]: E1125 07:04:44.964982 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6bf74b5bc8-nqmwd_openstack(fc2d466d-9429-472d-b1a4-cccf7da7f5fc)\"" pod="openstack/heat-api-6bf74b5bc8-nqmwd" podUID="fc2d466d-9429-472d-b1a4-cccf7da7f5fc" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.969968 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "278924b6-38eb-418e-87b6-be1872ee5464" (UID: "278924b6-38eb-418e-87b6-be1872ee5464"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.976431 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-scripts" (OuterVolumeSpecName: "scripts") pod "278924b6-38eb-418e-87b6-be1872ee5464" (UID: "278924b6-38eb-418e-87b6-be1872ee5464"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:44 crc kubenswrapper[4482]: I1125 07:04:44.980135 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-config-data" (OuterVolumeSpecName: "config-data") pod "f8e32069-3248-4216-a894-0ea4558d88f9" (UID: "f8e32069-3248-4216-a894-0ea4558d88f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.015659 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "278924b6-38eb-418e-87b6-be1872ee5464" (UID: "278924b6-38eb-418e-87b6-be1872ee5464"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.027551 4482 scope.go:117] "RemoveContainer" containerID="8f1d3b24346c2505d2a113b8bea1d426e5f49f0f96b408e99e4b3327cef2059d" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.030902 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-config-data" (OuterVolumeSpecName: "config-data") pod "278924b6-38eb-418e-87b6-be1872ee5464" (UID: "278924b6-38eb-418e-87b6-be1872ee5464"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.043458 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.051268 4482 scope.go:117] "RemoveContainer" containerID="a66e7b0a63ecfc39ec70f96736974092e34ead88fd88cc4d1c0aed390011b9c0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.057002 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.057043 4482 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 25 07:04:45 crc kubenswrapper[4482]: E1125 07:04:45.057691 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66e7b0a63ecfc39ec70f96736974092e34ead88fd88cc4d1c0aed390011b9c0\": container with ID starting with a66e7b0a63ecfc39ec70f96736974092e34ead88fd88cc4d1c0aed390011b9c0 not found: ID does not exist" containerID="a66e7b0a63ecfc39ec70f96736974092e34ead88fd88cc4d1c0aed390011b9c0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.057741 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66e7b0a63ecfc39ec70f96736974092e34ead88fd88cc4d1c0aed390011b9c0"} err="failed to get container status \"a66e7b0a63ecfc39ec70f96736974092e34ead88fd88cc4d1c0aed390011b9c0\": rpc error: code = NotFound desc = could not find container \"a66e7b0a63ecfc39ec70f96736974092e34ead88fd88cc4d1c0aed390011b9c0\": container with ID starting with a66e7b0a63ecfc39ec70f96736974092e34ead88fd88cc4d1c0aed390011b9c0 not found: ID does not exist" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.057767 4482 scope.go:117] "RemoveContainer" containerID="8f1d3b24346c2505d2a113b8bea1d426e5f49f0f96b408e99e4b3327cef2059d" Nov 25 07:04:45 crc kubenswrapper[4482]: E1125 07:04:45.058164 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f1d3b24346c2505d2a113b8bea1d426e5f49f0f96b408e99e4b3327cef2059d\": container with ID starting with 8f1d3b24346c2505d2a113b8bea1d426e5f49f0f96b408e99e4b3327cef2059d not found: ID does not exist" containerID="8f1d3b24346c2505d2a113b8bea1d426e5f49f0f96b408e99e4b3327cef2059d" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.058203 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f1d3b24346c2505d2a113b8bea1d426e5f49f0f96b408e99e4b3327cef2059d"} err="failed to get container status \"8f1d3b24346c2505d2a113b8bea1d426e5f49f0f96b408e99e4b3327cef2059d\": rpc error: code = NotFound desc = could not find container \"8f1d3b24346c2505d2a113b8bea1d426e5f49f0f96b408e99e4b3327cef2059d\": container with ID starting with 8f1d3b24346c2505d2a113b8bea1d426e5f49f0f96b408e99e4b3327cef2059d not found: ID does not exist" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.058220 4482 scope.go:117] "RemoveContainer" containerID="d1d40b20f9f1d226a9640aa607b72f3e08019f3aae87ed03690a378bf7abf260" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.059020 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntf4x\" (UniqueName: \"kubernetes.io/projected/278924b6-38eb-418e-87b6-be1872ee5464-kube-api-access-ntf4x\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:45 crc 
kubenswrapper[4482]: I1125 07:04:45.059040 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.059051 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e32069-3248-4216-a894-0ea4558d88f9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.059063 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278924b6-38eb-418e-87b6-be1872ee5464-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.084232 4482 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.091105 4482 scope.go:117] "RemoveContainer" containerID="d1d40b20f9f1d226a9640aa607b72f3e08019f3aae87ed03690a378bf7abf260" Nov 25 07:04:45 crc kubenswrapper[4482]: E1125 07:04:45.091564 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1d40b20f9f1d226a9640aa607b72f3e08019f3aae87ed03690a378bf7abf260\": container with ID starting with d1d40b20f9f1d226a9640aa607b72f3e08019f3aae87ed03690a378bf7abf260 not found: ID does not exist" containerID="d1d40b20f9f1d226a9640aa607b72f3e08019f3aae87ed03690a378bf7abf260" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.091594 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d40b20f9f1d226a9640aa607b72f3e08019f3aae87ed03690a378bf7abf260"} err="failed to get container status \"d1d40b20f9f1d226a9640aa607b72f3e08019f3aae87ed03690a378bf7abf260\": rpc error: code = NotFound desc = could not find container \"d1d40b20f9f1d226a9640aa607b72f3e08019f3aae87ed03690a378bf7abf260\": container with ID starting with d1d40b20f9f1d226a9640aa607b72f3e08019f3aae87ed03690a378bf7abf260 not found: ID does not exist" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.160231 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-logs\") pod \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.160379 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26825\" (UniqueName: \"kubernetes.io/projected/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-kube-api-access-26825\") pod \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.160452 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-httpd-run\") pod \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.160482 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-combined-ca-bundle\") pod 
\"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.160686 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-scripts\") pod \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.160790 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.160860 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-config-data\") pod \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\" (UID: \"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6\") " Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.161436 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-logs" (OuterVolumeSpecName: "logs") pod "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" (UID: "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.161627 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" (UID: "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.163252 4482 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.164676 4482 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.165431 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.167336 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-kube-api-access-26825" (OuterVolumeSpecName: "kube-api-access-26825") pod "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" (UID: "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6"). InnerVolumeSpecName "kube-api-access-26825". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.174263 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-scripts" (OuterVolumeSpecName: "scripts") pod "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" (UID: "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.174647 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" (UID: "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.195098 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" (UID: "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.266980 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26825\" (UniqueName: \"kubernetes.io/projected/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-kube-api-access-26825\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.267081 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.267133 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.267268 4482 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.288257 4482 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.306340 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-config-data" (OuterVolumeSpecName: "config-data") pod "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" (UID: "abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.369317 4482 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.369352 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.399376 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.403730 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.413217 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-dddd66fdc-jvpm8"] Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.417772 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-dddd66fdc-jvpm8"] Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.424148 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 07:04:45 crc kubenswrapper[4482]: E1125 07:04:45.424533 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e32069-3248-4216-a894-0ea4558d88f9" containerName="heat-api" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.424552 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e32069-3248-4216-a894-0ea4558d88f9" containerName="heat-api" Nov 25 07:04:45 crc kubenswrapper[4482]: E1125 07:04:45.424578 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0810e3e-ce88-42f5-a47d-8e101088577b" containerName="init" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.424584 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0810e3e-ce88-42f5-a47d-8e101088577b" containerName="init" Nov 25 07:04:45 crc kubenswrapper[4482]: E1125 07:04:45.424594 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="278924b6-38eb-418e-87b6-be1872ee5464" containerName="glance-log" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.424600 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="278924b6-38eb-418e-87b6-be1872ee5464" containerName="glance-log" Nov 25 07:04:45 crc kubenswrapper[4482]: E1125 07:04:45.424610 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" containerName="glance-log" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.424615 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" containerName="glance-log" Nov 25 07:04:45 crc kubenswrapper[4482]: E1125 07:04:45.424622 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="278924b6-38eb-418e-87b6-be1872ee5464" containerName="glance-httpd" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.424627 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="278924b6-38eb-418e-87b6-be1872ee5464" containerName="glance-httpd" Nov 25 07:04:45 crc kubenswrapper[4482]: E1125 07:04:45.424638 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" containerName="glance-httpd" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 
07:04:45.424642 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" containerName="glance-httpd" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.424820 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0810e3e-ce88-42f5-a47d-8e101088577b" containerName="init" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.424832 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" containerName="glance-httpd" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.424841 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="278924b6-38eb-418e-87b6-be1872ee5464" containerName="glance-httpd" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.424849 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="278924b6-38eb-418e-87b6-be1872ee5464" containerName="glance-log" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.424857 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e32069-3248-4216-a894-0ea4558d88f9" containerName="heat-api" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.424866 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" containerName="glance-log" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.425924 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.428371 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.428575 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.483524 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.573432 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9be1fb02-b896-4752-93f5-df9f22a09473-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.573491 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.573544 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98bfm\" (UniqueName: \"kubernetes.io/projected/9be1fb02-b896-4752-93f5-df9f22a09473-kube-api-access-98bfm\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.573615 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be1fb02-b896-4752-93f5-df9f22a09473-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.573642 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be1fb02-b896-4752-93f5-df9f22a09473-logs\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.574008 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be1fb02-b896-4752-93f5-df9f22a09473-scripts\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.574118 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be1fb02-b896-4752-93f5-df9f22a09473-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.574274 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be1fb02-b896-4752-93f5-df9f22a09473-config-data\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.676626 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9be1fb02-b896-4752-93f5-df9f22a09473-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.676675 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.676735 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98bfm\" (UniqueName: \"kubernetes.io/projected/9be1fb02-b896-4752-93f5-df9f22a09473-kube-api-access-98bfm\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.676790 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be1fb02-b896-4752-93f5-df9f22a09473-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.676817 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be1fb02-b896-4752-93f5-df9f22a09473-logs\") pod \"glance-default-external-api-0\" (UID: 
\"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.676914 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be1fb02-b896-4752-93f5-df9f22a09473-scripts\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.676961 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be1fb02-b896-4752-93f5-df9f22a09473-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.677004 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be1fb02-b896-4752-93f5-df9f22a09473-config-data\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.678228 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.678454 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9be1fb02-b896-4752-93f5-df9f22a09473-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.678747 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be1fb02-b896-4752-93f5-df9f22a09473-logs\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.683580 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be1fb02-b896-4752-93f5-df9f22a09473-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.685016 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be1fb02-b896-4752-93f5-df9f22a09473-scripts\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.686853 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be1fb02-b896-4752-93f5-df9f22a09473-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc 
kubenswrapper[4482]: I1125 07:04:45.688622 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be1fb02-b896-4752-93f5-df9f22a09473-config-data\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.699305 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98bfm\" (UniqueName: \"kubernetes.io/projected/9be1fb02-b896-4752-93f5-df9f22a09473-kube-api-access-98bfm\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.715592 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"9be1fb02-b896-4752-93f5-df9f22a09473\") " pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.740603 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.844794 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="278924b6-38eb-418e-87b6-be1872ee5464" path="/var/lib/kubelet/pods/278924b6-38eb-418e-87b6-be1872ee5464/volumes" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.845550 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e32069-3248-4216-a894-0ea4558d88f9" path="/var/lib/kubelet/pods/f8e32069-3248-4216-a894-0ea4558d88f9/volumes" Nov 25 07:04:45 crc kubenswrapper[4482]: I1125 07:04:45.998872 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923dd3f7-190f-4715-a057-3eb83c260918","Type":"ContainerStarted","Data":"138e7b3fc78c7397997119aaff6facabe368ec544e7104fe981d97473c78da72"} Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.000744 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.020337 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.020348 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6","Type":"ContainerDied","Data":"585b321496fa83b5ddffbcfceea8ff7f168693a4c04fae67bd76f3ecc129d84b"} Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.020393 4482 scope.go:117] "RemoveContainer" containerID="24f436aaa981ba58e94969feb77f32dde819f57a72783e513c714c4cacc44e1a" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.036983 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.729646113 podStartE2EDuration="13.036958992s" podCreationTimestamp="2025-11-25 07:04:33 +0000 UTC" firstStartedPulling="2025-11-25 07:04:37.743980139 +0000 UTC m=+1052.232211397" lastFinishedPulling="2025-11-25 07:04:45.051293017 +0000 UTC m=+1059.539524276" observedRunningTime="2025-11-25 07:04:46.034296753 +0000 UTC m=+1060.522528013" watchObservedRunningTime="2025-11-25 07:04:46.036958992 +0000 UTC m=+1060.525190251" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.053687 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.066606 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.068509 4482 scope.go:117] "RemoveContainer" containerID="e15d5775f5e089684b8c86af1ec76b4df36e9bec87b65ed9c5da196a74ecf658" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.077389 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.083100 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.090263 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.100482 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.104347 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.116114 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.116193 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdxsn\" (UniqueName: \"kubernetes.io/projected/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-kube-api-access-tdxsn\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.116215 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.116254 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.116445 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.116487 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.116581 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.116749 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.219114 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.219151 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdxsn\" (UniqueName: \"kubernetes.io/projected/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-kube-api-access-tdxsn\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.219220 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.219240 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.219876 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.220203 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.220236 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.220257 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.220320 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.220508 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.222531 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.234627 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.235051 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.235644 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.239087 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.239381 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdxsn\" (UniqueName: \"kubernetes.io/projected/b96268f7-8545-43f1-a1d2-5fe1f00a28f9-kube-api-access-tdxsn\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.251348 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"b96268f7-8545-43f1-a1d2-5fe1f00a28f9\") " pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.272832 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.418873 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.565459 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.565516 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-8549f976cf-6szl5" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.566829 4482 scope.go:117] "RemoveContainer" containerID="3ddc44c8f4e7d1ddead2b846947f816c3c7b220ffbb1e68a889ee516741bddbd" Nov 25 07:04:46 crc kubenswrapper[4482]: E1125 07:04:46.567438 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8549f976cf-6szl5_openstack(5c662f2a-8694-4f15-8e15-edadbbdaa093)\"" pod="openstack/heat-cfnapi-8549f976cf-6szl5" podUID="5c662f2a-8694-4f15-8e15-edadbbdaa093" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.758719 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.759865 4482 scope.go:117] "RemoveContainer" containerID="13b61690e842970ca6ad1e39bc48fb05fc884f74fb7e5a3fa6384fd47cdc4ba3" Nov 25 07:04:46 crc kubenswrapper[4482]: E1125 07:04:46.760075 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6bf74b5bc8-nqmwd_openstack(fc2d466d-9429-472d-b1a4-cccf7da7f5fc)\"" pod="openstack/heat-api-6bf74b5bc8-nqmwd" podUID="fc2d466d-9429-472d-b1a4-cccf7da7f5fc" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.760479 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6bf74b5bc8-nqmwd" Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.959210 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 07:04:46 crc kubenswrapper[4482]: I1125 07:04:46.959447 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:04:46 crc kubenswrapper[4482]: W1125 07:04:46.964637 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb96268f7_8545_43f1_a1d2_5fe1f00a28f9.slice/crio-deb889da34c652ce489502e27aa34bf2b45d5769cdbc6c9d0c7338afe1bff723 WatchSource:0}: Error finding container deb889da34c652ce489502e27aa34bf2b45d5769cdbc6c9d0c7338afe1bff723: Status 404 returned error can't find the container with id deb889da34c652ce489502e27aa34bf2b45d5769cdbc6c9d0c7338afe1bff723 Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.034409 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9be1fb02-b896-4752-93f5-df9f22a09473","Type":"ContainerStarted","Data":"b198e40771b59f9b072e003dd86c8c72f21bc501204cf9a44d0c85ef31dc5d62"} Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.034452 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9be1fb02-b896-4752-93f5-df9f22a09473","Type":"ContainerStarted","Data":"9021e810ffee7705d78048493cc2d4f483d80690b22ccd53b0380bfb6440317d"} Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.040023 4482 generic.go:334] "Generic (PLEG): container finished" podID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerID="bf57552a7fbbb61e7934b0e4c3f0cff69fbc4f6dd5ce6c818e2a6a4c59ffa912" exitCode=137 Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.040145 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fbb9df54d-nfljm" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.041424 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fbb9df54d-nfljm" event={"ID":"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db","Type":"ContainerDied","Data":"bf57552a7fbbb61e7934b0e4c3f0cff69fbc4f6dd5ce6c818e2a6a4c59ffa912"} Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.041457 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fbb9df54d-nfljm" event={"ID":"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db","Type":"ContainerDied","Data":"c5e4b61ec145d2e79cc4c53cef0936f15fef9b0980ae3d65522894c606220f22"} Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.041478 4482 scope.go:117] "RemoveContainer" containerID="b413209fdcec3cfb2c8c8ab7f1f86197105913d1fe9b1a9351cbb40552f3741c" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.044774 4482 generic.go:334] "Generic (PLEG): container finished" podID="6f1385f6-5258-4372-a20a-30a7229ec2e8" containerID="01c7d5ff0000392ead9d789749415cd0ef192c17b400db44e5603e6a3540cb56" exitCode=0 Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.044818 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ggvxs" event={"ID":"6f1385f6-5258-4372-a20a-30a7229ec2e8","Type":"ContainerDied","Data":"01c7d5ff0000392ead9d789749415cd0ef192c17b400db44e5603e6a3540cb56"} Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.047041 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b96268f7-8545-43f1-a1d2-5fe1f00a28f9","Type":"ContainerStarted","Data":"deb889da34c652ce489502e27aa34bf2b45d5769cdbc6c9d0c7338afe1bff723"} Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.051615 4482 scope.go:117] "RemoveContainer" containerID="13b61690e842970ca6ad1e39bc48fb05fc884f74fb7e5a3fa6384fd47cdc4ba3" Nov 25 07:04:47 crc 
kubenswrapper[4482]: E1125 07:04:47.052380 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6bf74b5bc8-nqmwd_openstack(fc2d466d-9429-472d-b1a4-cccf7da7f5fc)\"" pod="openstack/heat-api-6bf74b5bc8-nqmwd" podUID="fc2d466d-9429-472d-b1a4-cccf7da7f5fc" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.160959 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-logs\") pod \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.161384 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-horizon-secret-key\") pod \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.161514 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-scripts\") pod \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.161541 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5fwz\" (UniqueName: \"kubernetes.io/projected/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-kube-api-access-z5fwz\") pod \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.161630 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-config-data\") pod \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.161775 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-combined-ca-bundle\") pod \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.161817 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-horizon-tls-certs\") pod \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\" (UID: \"6211c8e7-91e5-4e27-b4b8-9d8bc904f6db\") " Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.172545 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-logs" (OuterVolumeSpecName: "logs") pod "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" (UID: "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.175785 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" (UID: "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.183351 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-kube-api-access-z5fwz" (OuterVolumeSpecName: "kube-api-access-z5fwz") pod "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" (UID: "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db"). InnerVolumeSpecName "kube-api-access-z5fwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.198303 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" (UID: "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.208845 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-scripts" (OuterVolumeSpecName: "scripts") pod "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" (UID: "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.222249 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-config-data" (OuterVolumeSpecName: "config-data") pod "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" (UID: "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.239317 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" (UID: "6211c8e7-91e5-4e27-b4b8-9d8bc904f6db"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.266883 4482 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.266921 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.266933 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5fwz\" (UniqueName: \"kubernetes.io/projected/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-kube-api-access-z5fwz\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.266945 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.266956 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.266966 4482 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.266977 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.284307 4482 scope.go:117] "RemoveContainer" containerID="bf57552a7fbbb61e7934b0e4c3f0cff69fbc4f6dd5ce6c818e2a6a4c59ffa912" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.326053 4482 scope.go:117] "RemoveContainer" containerID="b413209fdcec3cfb2c8c8ab7f1f86197105913d1fe9b1a9351cbb40552f3741c" Nov 25 07:04:47 crc kubenswrapper[4482]: E1125 07:04:47.328252 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b413209fdcec3cfb2c8c8ab7f1f86197105913d1fe9b1a9351cbb40552f3741c\": container with ID starting with b413209fdcec3cfb2c8c8ab7f1f86197105913d1fe9b1a9351cbb40552f3741c not found: ID does not exist" containerID="b413209fdcec3cfb2c8c8ab7f1f86197105913d1fe9b1a9351cbb40552f3741c" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.328342 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b413209fdcec3cfb2c8c8ab7f1f86197105913d1fe9b1a9351cbb40552f3741c"} err="failed to get container status \"b413209fdcec3cfb2c8c8ab7f1f86197105913d1fe9b1a9351cbb40552f3741c\": rpc error: code = NotFound desc = could not find container \"b413209fdcec3cfb2c8c8ab7f1f86197105913d1fe9b1a9351cbb40552f3741c\": container with ID starting with b413209fdcec3cfb2c8c8ab7f1f86197105913d1fe9b1a9351cbb40552f3741c not found: ID does not exist" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.328368 4482 scope.go:117] "RemoveContainer" containerID="bf57552a7fbbb61e7934b0e4c3f0cff69fbc4f6dd5ce6c818e2a6a4c59ffa912" Nov 25 07:04:47 crc kubenswrapper[4482]: 
E1125 07:04:47.328713 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf57552a7fbbb61e7934b0e4c3f0cff69fbc4f6dd5ce6c818e2a6a4c59ffa912\": container with ID starting with bf57552a7fbbb61e7934b0e4c3f0cff69fbc4f6dd5ce6c818e2a6a4c59ffa912 not found: ID does not exist" containerID="bf57552a7fbbb61e7934b0e4c3f0cff69fbc4f6dd5ce6c818e2a6a4c59ffa912" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.328764 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf57552a7fbbb61e7934b0e4c3f0cff69fbc4f6dd5ce6c818e2a6a4c59ffa912"} err="failed to get container status \"bf57552a7fbbb61e7934b0e4c3f0cff69fbc4f6dd5ce6c818e2a6a4c59ffa912\": rpc error: code = NotFound desc = could not find container \"bf57552a7fbbb61e7934b0e4c3f0cff69fbc4f6dd5ce6c818e2a6a4c59ffa912\": container with ID starting with bf57552a7fbbb61e7934b0e4c3f0cff69fbc4f6dd5ce6c818e2a6a4c59ffa912 not found: ID does not exist" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.376215 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fbb9df54d-nfljm"] Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.380609 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5fbb9df54d-nfljm"] Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.878067 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" path="/var/lib/kubelet/pods/6211c8e7-91e5-4e27-b4b8-9d8bc904f6db/volumes" Nov 25 07:04:47 crc kubenswrapper[4482]: I1125 07:04:47.879597 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6" path="/var/lib/kubelet/pods/abae76a1-5ff1-4fbc-a0ee-2f1edbbbb1d6/volumes" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.071350 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9be1fb02-b896-4752-93f5-df9f22a09473","Type":"ContainerStarted","Data":"84d19b91dc8728201af7d6b6e036795790baa666290efc44696c52f8fdf29b07"} Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.079254 4482 generic.go:334] "Generic (PLEG): container finished" podID="cda0ef98-7b63-4531-8655-a537323394a7" containerID="92238d549d3d1f1b3a9886cd6ca519323cb68b4f3696e31974afa748a8ab2ab7" exitCode=0 Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.079288 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cfr4t" event={"ID":"cda0ef98-7b63-4531-8655-a537323394a7","Type":"ContainerDied","Data":"92238d549d3d1f1b3a9886cd6ca519323cb68b4f3696e31974afa748a8ab2ab7"} Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.081418 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b96268f7-8545-43f1-a1d2-5fe1f00a28f9","Type":"ContainerStarted","Data":"caba1ffb88f2f38f7fcf9612c1b3bab41f103733736bf3e2fb7a01e51e32ee23"} Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.107902 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.107693 podStartE2EDuration="3.107693s" podCreationTimestamp="2025-11-25 07:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:04:48.103617888 +0000 UTC m=+1062.591849136" 
watchObservedRunningTime="2025-11-25 07:04:48.107693 +0000 UTC m=+1062.595924258" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.432834 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.618158 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-combined-ca-bundle\") pod \"6f1385f6-5258-4372-a20a-30a7229ec2e8\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.618578 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-db-sync-config-data\") pod \"6f1385f6-5258-4372-a20a-30a7229ec2e8\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.618744 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6x86\" (UniqueName: \"kubernetes.io/projected/6f1385f6-5258-4372-a20a-30a7229ec2e8-kube-api-access-v6x86\") pod \"6f1385f6-5258-4372-a20a-30a7229ec2e8\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.619036 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-config-data\") pod \"6f1385f6-5258-4372-a20a-30a7229ec2e8\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.619329 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f1385f6-5258-4372-a20a-30a7229ec2e8-etc-machine-id\") pod \"6f1385f6-5258-4372-a20a-30a7229ec2e8\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.619522 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-scripts\") pod \"6f1385f6-5258-4372-a20a-30a7229ec2e8\" (UID: \"6f1385f6-5258-4372-a20a-30a7229ec2e8\") " Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.619464 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f1385f6-5258-4372-a20a-30a7229ec2e8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6f1385f6-5258-4372-a20a-30a7229ec2e8" (UID: "6f1385f6-5258-4372-a20a-30a7229ec2e8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.620879 4482 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f1385f6-5258-4372-a20a-30a7229ec2e8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.624944 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-scripts" (OuterVolumeSpecName: "scripts") pod "6f1385f6-5258-4372-a20a-30a7229ec2e8" (UID: "6f1385f6-5258-4372-a20a-30a7229ec2e8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.641267 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f1385f6-5258-4372-a20a-30a7229ec2e8-kube-api-access-v6x86" (OuterVolumeSpecName: "kube-api-access-v6x86") pod "6f1385f6-5258-4372-a20a-30a7229ec2e8" (UID: "6f1385f6-5258-4372-a20a-30a7229ec2e8"). InnerVolumeSpecName "kube-api-access-v6x86". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.644406 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6f1385f6-5258-4372-a20a-30a7229ec2e8" (UID: "6f1385f6-5258-4372-a20a-30a7229ec2e8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.648336 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f1385f6-5258-4372-a20a-30a7229ec2e8" (UID: "6f1385f6-5258-4372-a20a-30a7229ec2e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.675984 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-config-data" (OuterVolumeSpecName: "config-data") pod "6f1385f6-5258-4372-a20a-30a7229ec2e8" (UID: "6f1385f6-5258-4372-a20a-30a7229ec2e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.722836 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.722868 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.722878 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.722893 4482 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f1385f6-5258-4372-a20a-30a7229ec2e8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:48 crc kubenswrapper[4482]: I1125 07:04:48.722902 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6x86\" (UniqueName: \"kubernetes.io/projected/6f1385f6-5258-4372-a20a-30a7229ec2e8-kube-api-access-v6x86\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.094239 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ggvxs" event={"ID":"6f1385f6-5258-4372-a20a-30a7229ec2e8","Type":"ContainerDied","Data":"c69b601921ad69eda2a72a0c54d4da7c58c0aed6349939cd194e1e2dab3939be"} Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.094279 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c69b601921ad69eda2a72a0c54d4da7c58c0aed6349939cd194e1e2dab3939be" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.094290 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ggvxs" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.096816 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b96268f7-8545-43f1-a1d2-5fe1f00a28f9","Type":"ContainerStarted","Data":"f5ba221fd99b1818cd0a85467664f5e942b44d3aaba692c6eb0784276ae0c99a"} Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.157272 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.157253566 podStartE2EDuration="3.157253566s" podCreationTimestamp="2025-11-25 07:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:04:49.14937828 +0000 UTC m=+1063.637609539" watchObservedRunningTime="2025-11-25 07:04:49.157253566 +0000 UTC m=+1063.645484825" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.415638 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5557bd8f45-rxxpl"] Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.421569 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" podUID="f9112227-4108-4545-b5ae-d9e3a5d79faa" containerName="dnsmasq-dns" containerID="cri-o://b975e2827d4e4e4721beba49b9653b5225fb454df16327c9f65e2d2922e595d3" gracePeriod=10 Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.425777 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.435382 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:04:49 crc kubenswrapper[4482]: E1125 07:04:49.435917 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f1385f6-5258-4372-a20a-30a7229ec2e8" containerName="cinder-db-sync" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.435938 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f1385f6-5258-4372-a20a-30a7229ec2e8" containerName="cinder-db-sync" Nov 25 07:04:49 crc kubenswrapper[4482]: E1125 07:04:49.435957 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.435964 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon" Nov 25 07:04:49 crc kubenswrapper[4482]: E1125 07:04:49.435989 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon-log" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.435995 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon-log" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.436204 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f1385f6-5258-4372-a20a-30a7229ec2e8" containerName="cinder-db-sync" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.436226 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" containerName="horizon-log" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.436236 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="6211c8e7-91e5-4e27-b4b8-9d8bc904f6db" 
containerName="horizon" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.437361 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.451085 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.451414 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fv2fv" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.451777 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.451991 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.486319 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.542257 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84dbcdd9df-95cth"] Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.562392 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.562496 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.562529 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.562571 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtpf9\" (UniqueName: \"kubernetes.io/projected/e5094767-b47d-4a62-9675-df093cdb0356-kube-api-access-gtpf9\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.562594 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.562643 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5094767-b47d-4a62-9675-df093cdb0356-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 
07:04:49.563913 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84dbcdd9df-95cth" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.583122 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84dbcdd9df-95cth"] Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.645443 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.647136 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.655421 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.664547 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5094767-b47d-4a62-9675-df093cdb0356-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.664753 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.664830 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.664869 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.664925 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtpf9\" (UniqueName: \"kubernetes.io/projected/e5094767-b47d-4a62-9675-df093cdb0356-kube-api-access-gtpf9\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.664948 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.668508 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5094767-b47d-4a62-9675-df093cdb0356-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0" Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.669601 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 
07:04:49.689208 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.691610 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.692531 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.694150 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.706211 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtpf9\" (UniqueName: \"kubernetes.io/projected/e5094767-b47d-4a62-9675-df093cdb0356-kube-api-access-gtpf9\") pod \"cinder-scheduler-0\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " pod="openstack/cinder-scheduler-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.769631 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.777675 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e01a92cb-30ad-406c-96b5-5ee6a610cd69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.777741 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-ovsdbserver-sb\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.777814 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-config\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.777832 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-config-data-custom\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.777850 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-ovsdbserver-nb\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.777876 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-scripts\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.777898 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e01a92cb-30ad-406c-96b5-5ee6a610cd69-logs\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.777916 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bhjk\" (UniqueName: \"kubernetes.io/projected/e01a92cb-30ad-406c-96b5-5ee6a610cd69-kube-api-access-5bhjk\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.777934 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.777956 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-dns-swift-storage-0\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.778018 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h88tt\" (UniqueName: \"kubernetes.io/projected/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-kube-api-access-h88tt\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.778038 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-config-data\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.778062 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-dns-svc\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.833088 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cfr4t"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.880400 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srcz5\" (UniqueName: \"kubernetes.io/projected/cda0ef98-7b63-4531-8655-a537323394a7-kube-api-access-srcz5\") pod \"cda0ef98-7b63-4531-8655-a537323394a7\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") "
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.880459 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-combined-ca-bundle\") pod \"cda0ef98-7b63-4531-8655-a537323394a7\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") "
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.880518 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-scripts\") pod \"cda0ef98-7b63-4531-8655-a537323394a7\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") "
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.880652 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-config-data\") pod \"cda0ef98-7b63-4531-8655-a537323394a7\" (UID: \"cda0ef98-7b63-4531-8655-a537323394a7\") "
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.880939 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e01a92cb-30ad-406c-96b5-5ee6a610cd69-logs\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.880961 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bhjk\" (UniqueName: \"kubernetes.io/projected/e01a92cb-30ad-406c-96b5-5ee6a610cd69-kube-api-access-5bhjk\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.880984 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.881014 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-dns-swift-storage-0\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.881068 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h88tt\" (UniqueName: \"kubernetes.io/projected/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-kube-api-access-h88tt\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.881088 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-config-data\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.881115 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-dns-svc\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.884155 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-dns-swift-storage-0\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.884687 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-dns-svc\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.884935 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e01a92cb-30ad-406c-96b5-5ee6a610cd69-logs\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.907027 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.907819 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-scripts" (OuterVolumeSpecName: "scripts") pod "cda0ef98-7b63-4531-8655-a537323394a7" (UID: "cda0ef98-7b63-4531-8655-a537323394a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.908839 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-config-data\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.909053 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda0ef98-7b63-4531-8655-a537323394a7-kube-api-access-srcz5" (OuterVolumeSpecName: "kube-api-access-srcz5") pod "cda0ef98-7b63-4531-8655-a537323394a7" (UID: "cda0ef98-7b63-4531-8655-a537323394a7"). InnerVolumeSpecName "kube-api-access-srcz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.909857 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bhjk\" (UniqueName: \"kubernetes.io/projected/e01a92cb-30ad-406c-96b5-5ee6a610cd69-kube-api-access-5bhjk\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.917513 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e01a92cb-30ad-406c-96b5-5ee6a610cd69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.917592 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-ovsdbserver-sb\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.917754 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-config\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.917776 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-config-data-custom\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.917794 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-ovsdbserver-nb\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.917832 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-scripts\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.917912 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srcz5\" (UniqueName: \"kubernetes.io/projected/cda0ef98-7b63-4531-8655-a537323394a7-kube-api-access-srcz5\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.917921 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.918951 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-config\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.919008 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e01a92cb-30ad-406c-96b5-5ee6a610cd69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.919567 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-ovsdbserver-sb\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.919795 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-ovsdbserver-nb\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.928399 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-scripts\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.940137 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h88tt\" (UniqueName: \"kubernetes.io/projected/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-kube-api-access-h88tt\") pod \"dnsmasq-dns-84dbcdd9df-95cth\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.941066 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-config-data-custom\") pod \"cinder-api-0\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") " pod="openstack/cinder-api-0"
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.956941 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-config-data" (OuterVolumeSpecName: "config-data") pod "cda0ef98-7b63-4531-8655-a537323394a7" (UID: "cda0ef98-7b63-4531-8655-a537323394a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:04:49 crc kubenswrapper[4482]: I1125 07:04:49.982923 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cda0ef98-7b63-4531-8655-a537323394a7" (UID: "cda0ef98-7b63-4531-8655-a537323394a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.019729 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.019752 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda0ef98-7b63-4531-8655-a537323394a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.069596 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.121070 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.237360 4482 generic.go:334] "Generic (PLEG): container finished" podID="f9112227-4108-4545-b5ae-d9e3a5d79faa" containerID="b975e2827d4e4e4721beba49b9653b5225fb454df16327c9f65e2d2922e595d3" exitCode=0
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.237676 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" event={"ID":"f9112227-4108-4545-b5ae-d9e3a5d79faa","Type":"ContainerDied","Data":"b975e2827d4e4e4721beba49b9653b5225fb454df16327c9f65e2d2922e595d3"}
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.287403 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cfr4t"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.288659 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cfr4t" event={"ID":"cda0ef98-7b63-4531-8655-a537323394a7","Type":"ContainerDied","Data":"d6e3a1f065cda323b649f532781cba6ed4f370e75e9b0319e9e4d87617a6c8fd"}
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.288701 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6e3a1f065cda323b649f532781cba6ed4f370e75e9b0319e9e4d87617a6c8fd"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.358337 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.485860 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 25 07:04:50 crc kubenswrapper[4482]: E1125 07:04:50.486291 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda0ef98-7b63-4531-8655-a537323394a7" containerName="nova-cell0-conductor-db-sync"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.486309 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda0ef98-7b63-4531-8655-a537323394a7" containerName="nova-cell0-conductor-db-sync"
Nov 25 07:04:50 crc kubenswrapper[4482]: E1125 07:04:50.486321 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9112227-4108-4545-b5ae-d9e3a5d79faa" containerName="dnsmasq-dns"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.486327 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9112227-4108-4545-b5ae-d9e3a5d79faa" containerName="dnsmasq-dns"
Nov 25 07:04:50 crc kubenswrapper[4482]: E1125 07:04:50.486360 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9112227-4108-4545-b5ae-d9e3a5d79faa" containerName="init"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.486368 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9112227-4108-4545-b5ae-d9e3a5d79faa" containerName="init"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.517479 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda0ef98-7b63-4531-8655-a537323394a7" containerName="nova-cell0-conductor-db-sync"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.517540 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9112227-4108-4545-b5ae-d9e3a5d79faa" containerName="dnsmasq-dns"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.518190 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.518274 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.522457 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.522696 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2s7cr"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.562146 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-config\") pod \"f9112227-4108-4545-b5ae-d9e3a5d79faa\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") "
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.562266 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-ovsdbserver-nb\") pod \"f9112227-4108-4545-b5ae-d9e3a5d79faa\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") "
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.562304 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-dns-swift-storage-0\") pod \"f9112227-4108-4545-b5ae-d9e3a5d79faa\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") "
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.562407 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-ovsdbserver-sb\") pod \"f9112227-4108-4545-b5ae-d9e3a5d79faa\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") "
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.562431 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrccq\" (UniqueName: \"kubernetes.io/projected/f9112227-4108-4545-b5ae-d9e3a5d79faa-kube-api-access-lrccq\") pod \"f9112227-4108-4545-b5ae-d9e3a5d79faa\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") "
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.562643 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-dns-svc\") pod \"f9112227-4108-4545-b5ae-d9e3a5d79faa\" (UID: \"f9112227-4108-4545-b5ae-d9e3a5d79faa\") "
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.572495 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dde6054d-7b3c-41ca-a16d-34693953644f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dde6054d-7b3c-41ca-a16d-34693953644f\") " pod="openstack/nova-cell0-conductor-0"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.572836 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dde6054d-7b3c-41ca-a16d-34693953644f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dde6054d-7b3c-41ca-a16d-34693953644f\") " pod="openstack/nova-cell0-conductor-0"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.572945 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45bmk\" (UniqueName: \"kubernetes.io/projected/dde6054d-7b3c-41ca-a16d-34693953644f-kube-api-access-45bmk\") pod \"nova-cell0-conductor-0\" (UID: \"dde6054d-7b3c-41ca-a16d-34693953644f\") " pod="openstack/nova-cell0-conductor-0"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.613217 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9112227-4108-4545-b5ae-d9e3a5d79faa-kube-api-access-lrccq" (OuterVolumeSpecName: "kube-api-access-lrccq") pod "f9112227-4108-4545-b5ae-d9e3a5d79faa" (UID: "f9112227-4108-4545-b5ae-d9e3a5d79faa"). InnerVolumeSpecName "kube-api-access-lrccq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.673792 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9112227-4108-4545-b5ae-d9e3a5d79faa" (UID: "f9112227-4108-4545-b5ae-d9e3a5d79faa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.675290 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45bmk\" (UniqueName: \"kubernetes.io/projected/dde6054d-7b3c-41ca-a16d-34693953644f-kube-api-access-45bmk\") pod \"nova-cell0-conductor-0\" (UID: \"dde6054d-7b3c-41ca-a16d-34693953644f\") " pod="openstack/nova-cell0-conductor-0"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.676538 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dde6054d-7b3c-41ca-a16d-34693953644f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dde6054d-7b3c-41ca-a16d-34693953644f\") " pod="openstack/nova-cell0-conductor-0"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.676602 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dde6054d-7b3c-41ca-a16d-34693953644f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dde6054d-7b3c-41ca-a16d-34693953644f\") " pod="openstack/nova-cell0-conductor-0"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.676695 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrccq\" (UniqueName: \"kubernetes.io/projected/f9112227-4108-4545-b5ae-d9e3a5d79faa-kube-api-access-lrccq\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.676719 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.692978 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dde6054d-7b3c-41ca-a16d-34693953644f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dde6054d-7b3c-41ca-a16d-34693953644f\") " pod="openstack/nova-cell0-conductor-0"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.717422 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f9112227-4108-4545-b5ae-d9e3a5d79faa" (UID: "f9112227-4108-4545-b5ae-d9e3a5d79faa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.737248 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dde6054d-7b3c-41ca-a16d-34693953644f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dde6054d-7b3c-41ca-a16d-34693953644f\") " pod="openstack/nova-cell0-conductor-0"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.741927 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 25 07:04:50 crc kubenswrapper[4482]: W1125 07:04:50.746281 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5094767_b47d_4a62_9675_df093cdb0356.slice/crio-c818c497b128e64eb35632daf7bea4ec3970d713dce9bc11c82219d8763798d8 WatchSource:0}: Error finding container c818c497b128e64eb35632daf7bea4ec3970d713dce9bc11c82219d8763798d8: Status 404 returned error can't find the container with id c818c497b128e64eb35632daf7bea4ec3970d713dce9bc11c82219d8763798d8
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.759923 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f9112227-4108-4545-b5ae-d9e3a5d79faa" (UID: "f9112227-4108-4545-b5ae-d9e3a5d79faa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.761757 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45bmk\" (UniqueName: \"kubernetes.io/projected/dde6054d-7b3c-41ca-a16d-34693953644f-kube-api-access-45bmk\") pod \"nova-cell0-conductor-0\" (UID: \"dde6054d-7b3c-41ca-a16d-34693953644f\") " pod="openstack/nova-cell0-conductor-0"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.781231 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.781246 4482 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.805811 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f9112227-4108-4545-b5ae-d9e3a5d79faa" (UID: "f9112227-4108-4545-b5ae-d9e3a5d79faa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.863246 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-config" (OuterVolumeSpecName: "config") pod "f9112227-4108-4545-b5ae-d9e3a5d79faa" (UID: "f9112227-4108-4545-b5ae-d9e3a5d79faa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.867606 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.885393 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:50 crc kubenswrapper[4482]: I1125 07:04:50.885423 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9112227-4108-4545-b5ae-d9e3a5d79faa-config\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.134943 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.244520 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 25 07:04:51 crc kubenswrapper[4482]: W1125 07:04:51.276255 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddde6054d_7b3c_41ca_a16d_34693953644f.slice/crio-6bd859e8098c5d69f36d76fdd793a813fe0caf522df1bdff50d6bb8c58f6631b WatchSource:0}: Error finding container 6bd859e8098c5d69f36d76fdd793a813fe0caf522df1bdff50d6bb8c58f6631b: Status 404 returned error can't find the container with id 6bd859e8098c5d69f36d76fdd793a813fe0caf522df1bdff50d6bb8c58f6631b
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.320483 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84dbcdd9df-95cth"]
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.335374 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5094767-b47d-4a62-9675-df093cdb0356","Type":"ContainerStarted","Data":"c818c497b128e64eb35632daf7bea4ec3970d713dce9bc11c82219d8763798d8"}
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.352024 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dde6054d-7b3c-41ca-a16d-34693953644f","Type":"ContainerStarted","Data":"6bd859e8098c5d69f36d76fdd793a813fe0caf522df1bdff50d6bb8c58f6631b"}
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.391379 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e01a92cb-30ad-406c-96b5-5ee6a610cd69","Type":"ContainerStarted","Data":"0de8805fac615e3a614db682c8438c720fa1b4f99fc09b6a4433c95204b7f752"}
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.443253 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl"
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.443393 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5557bd8f45-rxxpl" event={"ID":"f9112227-4108-4545-b5ae-d9e3a5d79faa","Type":"ContainerDied","Data":"bf139662224a9ddec6267f73b289879adc9d85c3f2c22b0f1ca82ac86f8f8201"}
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.447599 4482 scope.go:117] "RemoveContainer" containerID="b975e2827d4e4e4721beba49b9653b5225fb454df16327c9f65e2d2922e595d3"
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.504727 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5557bd8f45-rxxpl"]
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.509826 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5557bd8f45-rxxpl"]
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.537901 4482 scope.go:117] "RemoveContainer" containerID="56ec6c37c289987e62077f7231b6789860e37b8e710778c34d4471b6f052fc24"
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.595803 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-94697d564-bgxtg"
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.652034 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6fccbbd848-gp8qx"]
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.652276 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6fccbbd848-gp8qx" podUID="5bda1dfd-9f8b-4fbd-8093-689b7afada79" containerName="heat-engine" containerID="cri-o://08d1da05c3910796afa7506712e18f571090b9d1e1d10ddfdc0f55109287b8c3" gracePeriod=60
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.697512 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6fccbbd848-gp8qx"
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.756235 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-55c7dc97f5-ffnl6"
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.905344 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9112227-4108-4545-b5ae-d9e3a5d79faa" path="/var/lib/kubelet/pods/f9112227-4108-4545-b5ae-d9e3a5d79faa/volumes"
Nov 25 07:04:51 crc kubenswrapper[4482]: I1125 07:04:51.908539 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6bf74b5bc8-nqmwd"]
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.308117 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-b57c4d7bd-prkv2"
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.361936 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.431679 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6bf74b5bc8-nqmwd"
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.438754 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-8549f976cf-6szl5"]
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.508782 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-config-data-custom\") pod \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") "
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.509042 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt7nr\" (UniqueName: \"kubernetes.io/projected/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-kube-api-access-pt7nr\") pod \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") "
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.509119 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-config-data\") pod \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") "
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.509257 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-combined-ca-bundle\") pod \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\" (UID: \"fc2d466d-9429-472d-b1a4-cccf7da7f5fc\") "
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.518361 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fc2d466d-9429-472d-b1a4-cccf7da7f5fc" (UID: "fc2d466d-9429-472d-b1a4-cccf7da7f5fc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.531187 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-kube-api-access-pt7nr" (OuterVolumeSpecName: "kube-api-access-pt7nr") pod "fc2d466d-9429-472d-b1a4-cccf7da7f5fc" (UID: "fc2d466d-9429-472d-b1a4-cccf7da7f5fc"). InnerVolumeSpecName "kube-api-access-pt7nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.540358 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dde6054d-7b3c-41ca-a16d-34693953644f","Type":"ContainerStarted","Data":"48e66b3aca93b1568b0625d9cc2c1d27010861aa2e80678c4840e0a33c488427"}
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.541638 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.579340 4482 generic.go:334] "Generic (PLEG): container finished" podID="9189dc29-1a63-4e21-b4c6-066c86c6a7ab" containerID="d814388c59fd2296da5b79d661f2fb91c99baac0ddccdce6ea7519f22c8fa728" exitCode=0
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.579415 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84dbcdd9df-95cth" event={"ID":"9189dc29-1a63-4e21-b4c6-066c86c6a7ab","Type":"ContainerDied","Data":"d814388c59fd2296da5b79d661f2fb91c99baac0ddccdce6ea7519f22c8fa728"}
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.579437 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84dbcdd9df-95cth" event={"ID":"9189dc29-1a63-4e21-b4c6-066c86c6a7ab","Type":"ContainerStarted","Data":"87c17de301e6b65a27895eb4ff840273e3789f6d59321b45fc27a110aff16185"}
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.619739 4482 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.619898 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt7nr\" (UniqueName: \"kubernetes.io/projected/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-kube-api-access-pt7nr\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.622231 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.622210702 podStartE2EDuration="2.622210702s" podCreationTimestamp="2025-11-25 07:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:04:52.567337116 +0000 UTC m=+1067.055568375" watchObservedRunningTime="2025-11-25 07:04:52.622210702 +0000 UTC m=+1067.110441961"
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.636097 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc2d466d-9429-472d-b1a4-cccf7da7f5fc" (UID: "fc2d466d-9429-472d-b1a4-cccf7da7f5fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.637154 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6bf74b5bc8-nqmwd" event={"ID":"fc2d466d-9429-472d-b1a4-cccf7da7f5fc","Type":"ContainerDied","Data":"b0c44faceaf7ad098be394ae25ab68db63939fc3b7c94f4b762a5f92b7c8dbf8"}
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.637233 4482 scope.go:117] "RemoveContainer" containerID="13b61690e842970ca6ad1e39bc48fb05fc884f74fb7e5a3fa6384fd47cdc4ba3"
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.637337 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6bf74b5bc8-nqmwd"
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.674157 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-config-data" (OuterVolumeSpecName: "config-data") pod "fc2d466d-9429-472d-b1a4-cccf7da7f5fc" (UID: "fc2d466d-9429-472d-b1a4-cccf7da7f5fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.723661 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.723878 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2d466d-9429-472d-b1a4-cccf7da7f5fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:52 crc kubenswrapper[4482]: I1125 07:04:52.943589 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8549f976cf-6szl5"
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.012212 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6bf74b5bc8-nqmwd"]
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.037887 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-combined-ca-bundle\") pod \"5c662f2a-8694-4f15-8e15-edadbbdaa093\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") "
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.037942 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-config-data-custom\") pod \"5c662f2a-8694-4f15-8e15-edadbbdaa093\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") "
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.038037 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgstp\" (UniqueName: \"kubernetes.io/projected/5c662f2a-8694-4f15-8e15-edadbbdaa093-kube-api-access-qgstp\") pod \"5c662f2a-8694-4f15-8e15-edadbbdaa093\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") "
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.038133 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-config-data\") pod \"5c662f2a-8694-4f15-8e15-edadbbdaa093\" (UID: \"5c662f2a-8694-4f15-8e15-edadbbdaa093\") "
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.041560 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6bf74b5bc8-nqmwd"]
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.045110 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c662f2a-8694-4f15-8e15-edadbbdaa093-kube-api-access-qgstp" (OuterVolumeSpecName: "kube-api-access-qgstp") pod "5c662f2a-8694-4f15-8e15-edadbbdaa093" (UID: "5c662f2a-8694-4f15-8e15-edadbbdaa093"). InnerVolumeSpecName "kube-api-access-qgstp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.063328 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5c662f2a-8694-4f15-8e15-edadbbdaa093" (UID: "5c662f2a-8694-4f15-8e15-edadbbdaa093"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.147014 4482 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.147046 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgstp\" (UniqueName: \"kubernetes.io/projected/5c662f2a-8694-4f15-8e15-edadbbdaa093-kube-api-access-qgstp\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.161315 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c662f2a-8694-4f15-8e15-edadbbdaa093" (UID: "5c662f2a-8694-4f15-8e15-edadbbdaa093"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.248805 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.263320 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-config-data" (OuterVolumeSpecName: "config-data") pod "5c662f2a-8694-4f15-8e15-edadbbdaa093" (UID: "5c662f2a-8694-4f15-8e15-edadbbdaa093"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.335062 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6f98797bb6-chb76"
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.350670 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c662f2a-8694-4f15-8e15-edadbbdaa093-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.672761 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84dbcdd9df-95cth" event={"ID":"9189dc29-1a63-4e21-b4c6-066c86c6a7ab","Type":"ContainerStarted","Data":"2b7cf784913e44d4f524680ae537f8d6f3bf8195b7ff2f2af16085ac5c04e0f2"}
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.673834 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84dbcdd9df-95cth"
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.679399 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8549f976cf-6szl5" event={"ID":"5c662f2a-8694-4f15-8e15-edadbbdaa093","Type":"ContainerDied","Data":"688f48748edfb644ba06c27632e25127f57ceec85bb3461c2856a6489f3930b0"}
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.679436 4482 scope.go:117] "RemoveContainer" containerID="3ddc44c8f4e7d1ddead2b846947f816c3c7b220ffbb1e68a889ee516741bddbd"
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.679503 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8549f976cf-6szl5"
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.695466 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5094767-b47d-4a62-9675-df093cdb0356","Type":"ContainerStarted","Data":"b5a5f4145ed24fedd619b4c0b5f084d48d244bb59b7a75e74e734973020b115e"}
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.699449 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84dbcdd9df-95cth" podStartSLOduration=4.699438195 podStartE2EDuration="4.699438195s" podCreationTimestamp="2025-11-25 07:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:04:53.695050704 +0000 UTC m=+1068.183281963" watchObservedRunningTime="2025-11-25 07:04:53.699438195 +0000 UTC m=+1068.187669453"
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.707263 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e01a92cb-30ad-406c-96b5-5ee6a610cd69","Type":"ContainerStarted","Data":"c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2"}
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.719759 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-8549f976cf-6szl5"]
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.742549 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-8549f976cf-6szl5"]
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.851340 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c662f2a-8694-4f15-8e15-edadbbdaa093" path="/var/lib/kubelet/pods/5c662f2a-8694-4f15-8e15-edadbbdaa093/volumes"
Nov 25 07:04:53 crc kubenswrapper[4482]: I1125 07:04:53.852064 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2d466d-9429-472d-b1a4-cccf7da7f5fc" path="/var/lib/kubelet/pods/fc2d466d-9429-472d-b1a4-cccf7da7f5fc/volumes"
Nov 25 07:04:54 crc kubenswrapper[4482]: E1125 07:04:54.737318 4482 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="08d1da05c3910796afa7506712e18f571090b9d1e1d10ddfdc0f55109287b8c3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 25 07:04:54 crc kubenswrapper[4482]: E1125 07:04:54.750615 4482 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="08d1da05c3910796afa7506712e18f571090b9d1e1d10ddfdc0f55109287b8c3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 25 07:04:54 crc kubenswrapper[4482]: I1125 07:04:54.759470 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5094767-b47d-4a62-9675-df093cdb0356","Type":"ContainerStarted","Data":"e38bb34d2bf3a6ff99020a94df1689ee204ee3abcea289f5ea6dd65ed2f5cb79"}
Nov 25 07:04:54 crc kubenswrapper[4482]: I1125 07:04:54.771584 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Nov 25 07:04:54 crc kubenswrapper[4482]: E1125 07:04:54.781130 4482 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="08d1da05c3910796afa7506712e18f571090b9d1e1d10ddfdc0f55109287b8c3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 25 07:04:54 crc kubenswrapper[4482]: E1125 07:04:54.788470 4482 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6fccbbd848-gp8qx" podUID="5bda1dfd-9f8b-4fbd-8093-689b7afada79" containerName="heat-engine"
Nov 25 07:04:54 crc kubenswrapper[4482]: I1125 07:04:54.789333 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e01a92cb-30ad-406c-96b5-5ee6a610cd69","Type":"ContainerStarted","Data":"496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b"}
Nov 25 07:04:54 crc kubenswrapper[4482]: I1125 07:04:54.789503 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e01a92cb-30ad-406c-96b5-5ee6a610cd69" containerName="cinder-api-log" containerID="cri-o://c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2" gracePeriod=30
Nov 25 07:04:54 crc kubenswrapper[4482]: I1125 07:04:54.789616 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e01a92cb-30ad-406c-96b5-5ee6a610cd69" containerName="cinder-api" containerID="cri-o://496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b" gracePeriod=30
Nov 25 07:04:54 crc kubenswrapper[4482]: I1125 07:04:54.789710 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Nov 25 07:04:54 crc kubenswrapper[4482]: I1125 07:04:54.808518 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.343411679 podStartE2EDuration="5.808494589s" podCreationTimestamp="2025-11-25 07:04:49 +0000 UTC" firstStartedPulling="2025-11-25 07:04:50.779426616 +0000 UTC m=+1065.267657875" lastFinishedPulling="2025-11-25 07:04:52.244509526 +0000 UTC m=+1066.732740785" observedRunningTime="2025-11-25 07:04:54.782201423 +0000 UTC m=+1069.270432671" watchObservedRunningTime="2025-11-25 07:04:54.808494589 +0000 UTC m=+1069.296725838"
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.671712 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.706311 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-config-data-custom\") pod \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") "
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.706832 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-scripts\") pod \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") "
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.706935 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bhjk\" (UniqueName: \"kubernetes.io/projected/e01a92cb-30ad-406c-96b5-5ee6a610cd69-kube-api-access-5bhjk\") pod \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") "
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.707018 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e01a92cb-30ad-406c-96b5-5ee6a610cd69-etc-machine-id\") pod \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") "
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.707112 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-config-data\") pod \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") "
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.707202 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e01a92cb-30ad-406c-96b5-5ee6a610cd69-logs\") pod \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") "
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.707290 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-combined-ca-bundle\") pod \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\" (UID: \"e01a92cb-30ad-406c-96b5-5ee6a610cd69\") "
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.720797 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01a92cb-30ad-406c-96b5-5ee6a610cd69-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e01a92cb-30ad-406c-96b5-5ee6a610cd69" (UID: "e01a92cb-30ad-406c-96b5-5ee6a610cd69"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.722297 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e01a92cb-30ad-406c-96b5-5ee6a610cd69-kube-api-access-5bhjk" (OuterVolumeSpecName: "kube-api-access-5bhjk") pod "e01a92cb-30ad-406c-96b5-5ee6a610cd69" (UID: "e01a92cb-30ad-406c-96b5-5ee6a610cd69"). InnerVolumeSpecName "kube-api-access-5bhjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.723245 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e01a92cb-30ad-406c-96b5-5ee6a610cd69-logs" (OuterVolumeSpecName: "logs") pod "e01a92cb-30ad-406c-96b5-5ee6a610cd69" (UID: "e01a92cb-30ad-406c-96b5-5ee6a610cd69"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.723329 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e01a92cb-30ad-406c-96b5-5ee6a610cd69" (UID: "e01a92cb-30ad-406c-96b5-5ee6a610cd69"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.731822 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-scripts" (OuterVolumeSpecName: "scripts") pod "e01a92cb-30ad-406c-96b5-5ee6a610cd69" (UID: "e01a92cb-30ad-406c-96b5-5ee6a610cd69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.743398 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.744565 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.758598 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e01a92cb-30ad-406c-96b5-5ee6a610cd69" (UID: "e01a92cb-30ad-406c-96b5-5ee6a610cd69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.811355 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.811391 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bhjk\" (UniqueName: \"kubernetes.io/projected/e01a92cb-30ad-406c-96b5-5ee6a610cd69-kube-api-access-5bhjk\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.811406 4482 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e01a92cb-30ad-406c-96b5-5ee6a610cd69-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.811419 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e01a92cb-30ad-406c-96b5-5ee6a610cd69-logs\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.811427 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.811436 4482 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.815476 4482 generic.go:334] "Generic (PLEG): container finished" podID="e01a92cb-30ad-406c-96b5-5ee6a610cd69" containerID="496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b" exitCode=0
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.815572 4482 generic.go:334] "Generic (PLEG): container finished" podID="e01a92cb-30ad-406c-96b5-5ee6a610cd69" containerID="c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2" exitCode=143
Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.816054 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.816746 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e01a92cb-30ad-406c-96b5-5ee6a610cd69","Type":"ContainerDied","Data":"496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b"} Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.816862 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e01a92cb-30ad-406c-96b5-5ee6a610cd69","Type":"ContainerDied","Data":"c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2"} Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.816931 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e01a92cb-30ad-406c-96b5-5ee6a610cd69","Type":"ContainerDied","Data":"0de8805fac615e3a614db682c8438c720fa1b4f99fc09b6a4433c95204b7f752"} Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.817002 4482 scope.go:117] "RemoveContainer" containerID="496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.822277 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-config-data" (OuterVolumeSpecName: "config-data") pod "e01a92cb-30ad-406c-96b5-5ee6a610cd69" (UID: "e01a92cb-30ad-406c-96b5-5ee6a610cd69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.824459 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.826857 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.883923 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.894239 4482 scope.go:117] "RemoveContainer" containerID="c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.916825 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01a92cb-30ad-406c-96b5-5ee6a610cd69-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.939267 4482 scope.go:117] "RemoveContainer" containerID="496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b" Nov 25 07:04:55 crc kubenswrapper[4482]: E1125 07:04:55.939668 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b\": container with ID starting with 496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b not found: ID does not exist" containerID="496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.939698 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b"} err="failed to get container status \"496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b\": rpc error: code = NotFound desc = could not 
find container \"496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b\": container with ID starting with 496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b not found: ID does not exist" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.939727 4482 scope.go:117] "RemoveContainer" containerID="c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2" Nov 25 07:04:55 crc kubenswrapper[4482]: E1125 07:04:55.939901 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2\": container with ID starting with c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2 not found: ID does not exist" containerID="c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.939926 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2"} err="failed to get container status \"c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2\": rpc error: code = NotFound desc = could not find container \"c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2\": container with ID starting with c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2 not found: ID does not exist" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.939938 4482 scope.go:117] "RemoveContainer" containerID="496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.940108 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b"} err="failed to get container status \"496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b\": rpc error: code = NotFound desc = could not find container \"496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b\": container with ID starting with 496f7ca7b96027a99d5b2554675d97c83add9d4e4c61e32f5de5509c2e0c7d3b not found: ID does not exist" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.940131 4482 scope.go:117] "RemoveContainer" containerID="c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2" Nov 25 07:04:55 crc kubenswrapper[4482]: I1125 07:04:55.940565 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2"} err="failed to get container status \"c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2\": rpc error: code = NotFound desc = could not find container \"c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2\": container with ID starting with c03255e42adfac1ac402d0953803bd227a52aaf1c29377731f0435e83aea5db2 not found: ID does not exist" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.146196 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.171576 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.182264 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 07:04:56 crc kubenswrapper[4482]: E1125 07:04:56.182645 4482 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e01a92cb-30ad-406c-96b5-5ee6a610cd69" containerName="cinder-api" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.182663 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01a92cb-30ad-406c-96b5-5ee6a610cd69" containerName="cinder-api" Nov 25 07:04:56 crc kubenswrapper[4482]: E1125 07:04:56.182683 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c662f2a-8694-4f15-8e15-edadbbdaa093" containerName="heat-cfnapi" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.182690 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c662f2a-8694-4f15-8e15-edadbbdaa093" containerName="heat-cfnapi" Nov 25 07:04:56 crc kubenswrapper[4482]: E1125 07:04:56.182701 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2d466d-9429-472d-b1a4-cccf7da7f5fc" containerName="heat-api" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.182708 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2d466d-9429-472d-b1a4-cccf7da7f5fc" containerName="heat-api" Nov 25 07:04:56 crc kubenswrapper[4482]: E1125 07:04:56.182729 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e01a92cb-30ad-406c-96b5-5ee6a610cd69" containerName="cinder-api-log" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.182735 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01a92cb-30ad-406c-96b5-5ee6a610cd69" containerName="cinder-api-log" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.182913 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01a92cb-30ad-406c-96b5-5ee6a610cd69" containerName="cinder-api-log" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.182927 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c662f2a-8694-4f15-8e15-edadbbdaa093" containerName="heat-cfnapi" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.182940 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2d466d-9429-472d-b1a4-cccf7da7f5fc" containerName="heat-api" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.182947 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c662f2a-8694-4f15-8e15-edadbbdaa093" containerName="heat-cfnapi" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.182961 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01a92cb-30ad-406c-96b5-5ee6a610cd69" containerName="cinder-api" Nov 25 07:04:56 crc kubenswrapper[4482]: E1125 07:04:56.183151 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c662f2a-8694-4f15-8e15-edadbbdaa093" containerName="heat-cfnapi" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.183164 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c662f2a-8694-4f15-8e15-edadbbdaa093" containerName="heat-cfnapi" Nov 25 07:04:56 crc kubenswrapper[4482]: E1125 07:04:56.183182 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2d466d-9429-472d-b1a4-cccf7da7f5fc" containerName="heat-api" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.183188 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2d466d-9429-472d-b1a4-cccf7da7f5fc" containerName="heat-api" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.183356 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2d466d-9429-472d-b1a4-cccf7da7f5fc" containerName="heat-api" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.184058 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.186503 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.186699 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.192387 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.202500 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.328911 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.329317 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-config-data-custom\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.329390 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-scripts\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.329422 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0b90977-9c0c-4191-b454-b61ee871d3ba-logs\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.329442 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.329554 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccdsv\" (UniqueName: \"kubernetes.io/projected/d0b90977-9c0c-4191-b454-b61ee871d3ba-kube-api-access-ccdsv\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.329585 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0b90977-9c0c-4191-b454-b61ee871d3ba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.329607 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-config-data\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.329647 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.420042 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.420114 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.442423 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccdsv\" (UniqueName: \"kubernetes.io/projected/d0b90977-9c0c-4191-b454-b61ee871d3ba-kube-api-access-ccdsv\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.442470 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0b90977-9c0c-4191-b454-b61ee871d3ba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.442497 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-config-data\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.442550 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.442664 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.442693 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-config-data-custom\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.442729 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-scripts\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.442768 4482 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0b90977-9c0c-4191-b454-b61ee871d3ba-logs\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.442783 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.443312 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0b90977-9c0c-4191-b454-b61ee871d3ba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.445574 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0b90977-9c0c-4191-b454-b61ee871d3ba-logs\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.453738 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.454267 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-config-data-custom\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.454465 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-scripts\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.459850 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-config-data\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.460694 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.469588 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0b90977-9c0c-4191-b454-b61ee871d3ba-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.472586 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccdsv\" (UniqueName: 
\"kubernetes.io/projected/d0b90977-9c0c-4191-b454-b61ee871d3ba-kube-api-access-ccdsv\") pod \"cinder-api-0\" (UID: \"d0b90977-9c0c-4191-b454-b61ee871d3ba\") " pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.529357 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.535754 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.550476 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.827281 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.827547 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:56 crc kubenswrapper[4482]: I1125 07:04:56.827561 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 07:04:57 crc kubenswrapper[4482]: W1125 07:04:57.063112 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0b90977_9c0c_4191_b454_b61ee871d3ba.slice/crio-ffc06dbee3dceb9b57dbbb41cce42f3657307346969077c917fbcd8776cb5637 WatchSource:0}: Error finding container ffc06dbee3dceb9b57dbbb41cce42f3657307346969077c917fbcd8776cb5637: Status 404 returned error can't find the container with id ffc06dbee3dceb9b57dbbb41cce42f3657307346969077c917fbcd8776cb5637 Nov 25 07:04:57 crc kubenswrapper[4482]: I1125 07:04:57.063199 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 07:04:57 crc kubenswrapper[4482]: I1125 07:04:57.855528 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e01a92cb-30ad-406c-96b5-5ee6a610cd69" path="/var/lib/kubelet/pods/e01a92cb-30ad-406c-96b5-5ee6a610cd69/volumes" Nov 25 07:04:57 crc kubenswrapper[4482]: I1125 07:04:57.856360 4482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 07:04:57 crc kubenswrapper[4482]: I1125 07:04:57.862132 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d0b90977-9c0c-4191-b454-b61ee871d3ba","Type":"ContainerStarted","Data":"e6dc1e66cc216d10635739c449cb2395ec564fa068a4c81720d356b6bc6ba1b5"} Nov 25 07:04:57 crc kubenswrapper[4482]: I1125 07:04:57.862192 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d0b90977-9c0c-4191-b454-b61ee871d3ba","Type":"ContainerStarted","Data":"ffc06dbee3dceb9b57dbbb41cce42f3657307346969077c917fbcd8776cb5637"} Nov 25 07:04:58 crc kubenswrapper[4482]: I1125 07:04:58.871388 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d0b90977-9c0c-4191-b454-b61ee871d3ba","Type":"ContainerStarted","Data":"bacac0348b159d8d16fe6ae2276d860dde120b41c172e4cf82f438566f2e6ed8"} Nov 25 07:04:58 crc kubenswrapper[4482]: I1125 07:04:58.871914 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 07:04:58 crc kubenswrapper[4482]: I1125 07:04:58.903633 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-api-0" podStartSLOduration=2.903589443 podStartE2EDuration="2.903589443s" podCreationTimestamp="2025-11-25 07:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:04:58.894523112 +0000 UTC m=+1073.382754371" watchObservedRunningTime="2025-11-25 07:04:58.903589443 +0000 UTC m=+1073.391820712" Nov 25 07:04:59 crc kubenswrapper[4482]: I1125 07:04:59.157442 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:59 crc kubenswrapper[4482]: I1125 07:04:59.157563 4482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 07:04:59 crc kubenswrapper[4482]: I1125 07:04:59.161841 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 07:04:59 crc kubenswrapper[4482]: I1125 07:04:59.161958 4482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 07:04:59 crc kubenswrapper[4482]: I1125 07:04:59.165552 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 07:04:59 crc kubenswrapper[4482]: I1125 07:04:59.223351 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 07:04:59 crc kubenswrapper[4482]: I1125 07:04:59.988703 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.023164 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.073356 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84dbcdd9df-95cth" Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.217624 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58d8d55fc5-62wcn"] Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.230080 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" podUID="22ad88d8-cb8a-4137-b2d6-f8e787a1526b" containerName="dnsmasq-dns" containerID="cri-o://1d8100df2ff1c8fc7cb6c8eb4fa7b81293b28c95ba40e747a682df17ca2f74e3" gracePeriod=10 Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.832533 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.880672 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-dns-svc\") pod \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.880714 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlbks\" (UniqueName: \"kubernetes.io/projected/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-kube-api-access-jlbks\") pod \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.880753 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-config\") pod \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.880813 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-ovsdbserver-sb\") pod \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.880840 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-dns-swift-storage-0\") pod \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.880890 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-ovsdbserver-nb\") pod \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\" (UID: \"22ad88d8-cb8a-4137-b2d6-f8e787a1526b\") " Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.913090 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-kube-api-access-jlbks" (OuterVolumeSpecName: "kube-api-access-jlbks") pod "22ad88d8-cb8a-4137-b2d6-f8e787a1526b" (UID: "22ad88d8-cb8a-4137-b2d6-f8e787a1526b"). InnerVolumeSpecName "kube-api-access-jlbks". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.940089 4482 generic.go:334] "Generic (PLEG): container finished" podID="22ad88d8-cb8a-4137-b2d6-f8e787a1526b" containerID="1d8100df2ff1c8fc7cb6c8eb4fa7b81293b28c95ba40e747a682df17ca2f74e3" exitCode=0 Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.940335 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e5094767-b47d-4a62-9675-df093cdb0356" containerName="cinder-scheduler" containerID="cri-o://b5a5f4145ed24fedd619b4c0b5f084d48d244bb59b7a75e74e734973020b115e" gracePeriod=30 Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.940648 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.941231 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" event={"ID":"22ad88d8-cb8a-4137-b2d6-f8e787a1526b","Type":"ContainerDied","Data":"1d8100df2ff1c8fc7cb6c8eb4fa7b81293b28c95ba40e747a682df17ca2f74e3"} Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.941261 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58d8d55fc5-62wcn" event={"ID":"22ad88d8-cb8a-4137-b2d6-f8e787a1526b","Type":"ContainerDied","Data":"709ca5739c0d46ea26a93c7a09a3403583a095d08d3650d31d8572d24427c714"} Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.941278 4482 scope.go:117] "RemoveContainer" containerID="1d8100df2ff1c8fc7cb6c8eb4fa7b81293b28c95ba40e747a682df17ca2f74e3" Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.941607 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e5094767-b47d-4a62-9675-df093cdb0356" containerName="probe" containerID="cri-o://e38bb34d2bf3a6ff99020a94df1689ee204ee3abcea289f5ea6dd65ed2f5cb79" gracePeriod=30 Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.945820 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.948445 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-config" (OuterVolumeSpecName: "config") pod "22ad88d8-cb8a-4137-b2d6-f8e787a1526b" (UID: "22ad88d8-cb8a-4137-b2d6-f8e787a1526b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.983351 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlbks\" (UniqueName: \"kubernetes.io/projected/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-kube-api-access-jlbks\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.983383 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:00 crc kubenswrapper[4482]: I1125 07:05:00.986987 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "22ad88d8-cb8a-4137-b2d6-f8e787a1526b" (UID: "22ad88d8-cb8a-4137-b2d6-f8e787a1526b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.021649 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "22ad88d8-cb8a-4137-b2d6-f8e787a1526b" (UID: "22ad88d8-cb8a-4137-b2d6-f8e787a1526b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.024404 4482 scope.go:117] "RemoveContainer" containerID="f076f517b51ebb8e0ad7d429ff2f80da111c21197a57cdf0d5e15e6205b71841" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.025618 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "22ad88d8-cb8a-4137-b2d6-f8e787a1526b" (UID: "22ad88d8-cb8a-4137-b2d6-f8e787a1526b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.053887 4482 scope.go:117] "RemoveContainer" containerID="1d8100df2ff1c8fc7cb6c8eb4fa7b81293b28c95ba40e747a682df17ca2f74e3" Nov 25 07:05:01 crc kubenswrapper[4482]: E1125 07:05:01.057277 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d8100df2ff1c8fc7cb6c8eb4fa7b81293b28c95ba40e747a682df17ca2f74e3\": container with ID starting with 1d8100df2ff1c8fc7cb6c8eb4fa7b81293b28c95ba40e747a682df17ca2f74e3 not found: ID does not exist" containerID="1d8100df2ff1c8fc7cb6c8eb4fa7b81293b28c95ba40e747a682df17ca2f74e3" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.057321 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d8100df2ff1c8fc7cb6c8eb4fa7b81293b28c95ba40e747a682df17ca2f74e3"} err="failed to get container status \"1d8100df2ff1c8fc7cb6c8eb4fa7b81293b28c95ba40e747a682df17ca2f74e3\": rpc error: code = NotFound desc = could not find container \"1d8100df2ff1c8fc7cb6c8eb4fa7b81293b28c95ba40e747a682df17ca2f74e3\": container with ID starting with 1d8100df2ff1c8fc7cb6c8eb4fa7b81293b28c95ba40e747a682df17ca2f74e3 not found: ID does not exist" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.057346 4482 scope.go:117] "RemoveContainer" containerID="f076f517b51ebb8e0ad7d429ff2f80da111c21197a57cdf0d5e15e6205b71841" Nov 25 07:05:01 crc kubenswrapper[4482]: E1125 07:05:01.059106 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f076f517b51ebb8e0ad7d429ff2f80da111c21197a57cdf0d5e15e6205b71841\": container with ID starting with f076f517b51ebb8e0ad7d429ff2f80da111c21197a57cdf0d5e15e6205b71841 not found: ID does not exist" containerID="f076f517b51ebb8e0ad7d429ff2f80da111c21197a57cdf0d5e15e6205b71841" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.059138 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f076f517b51ebb8e0ad7d429ff2f80da111c21197a57cdf0d5e15e6205b71841"} err="failed to get container status \"f076f517b51ebb8e0ad7d429ff2f80da111c21197a57cdf0d5e15e6205b71841\": rpc error: code = NotFound desc = could not find container \"f076f517b51ebb8e0ad7d429ff2f80da111c21197a57cdf0d5e15e6205b71841\": container with ID starting with f076f517b51ebb8e0ad7d429ff2f80da111c21197a57cdf0d5e15e6205b71841 not found: ID does not exist" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.085317 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.085339 4482 reconciler_common.go:293] "Volume detached for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.085350 4482 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.175837 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "22ad88d8-cb8a-4137-b2d6-f8e787a1526b" (UID: "22ad88d8-cb8a-4137-b2d6-f8e787a1526b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.187412 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22ad88d8-cb8a-4137-b2d6-f8e787a1526b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.292992 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58d8d55fc5-62wcn"] Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.299927 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58d8d55fc5-62wcn"] Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.649158 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-vddpr"] Nov 25 07:05:01 crc kubenswrapper[4482]: E1125 07:05:01.649847 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22ad88d8-cb8a-4137-b2d6-f8e787a1526b" containerName="init" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.649868 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ad88d8-cb8a-4137-b2d6-f8e787a1526b" containerName="init" Nov 25 07:05:01 crc kubenswrapper[4482]: E1125 07:05:01.649897 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22ad88d8-cb8a-4137-b2d6-f8e787a1526b" containerName="dnsmasq-dns" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.649902 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ad88d8-cb8a-4137-b2d6-f8e787a1526b" containerName="dnsmasq-dns" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.650116 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="22ad88d8-cb8a-4137-b2d6-f8e787a1526b" containerName="dnsmasq-dns" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.650758 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.659869 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.659889 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.665243 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vddpr"] Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.699654 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-config-data\") pod \"nova-cell0-cell-mapping-vddpr\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.699901 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28tnr\" (UniqueName: \"kubernetes.io/projected/1909a799-3429-4fe2-adca-d756ae0c7c59-kube-api-access-28tnr\") pod \"nova-cell0-cell-mapping-vddpr\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.699975 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-scripts\") pod \"nova-cell0-cell-mapping-vddpr\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.700016 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vddpr\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.801774 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28tnr\" (UniqueName: \"kubernetes.io/projected/1909a799-3429-4fe2-adca-d756ae0c7c59-kube-api-access-28tnr\") pod \"nova-cell0-cell-mapping-vddpr\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.801894 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-scripts\") pod \"nova-cell0-cell-mapping-vddpr\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.801946 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vddpr\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.802100 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-config-data\") pod \"nova-cell0-cell-mapping-vddpr\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.812855 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vddpr\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.833992 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-scripts\") pod \"nova-cell0-cell-mapping-vddpr\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.834701 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-config-data\") pod \"nova-cell0-cell-mapping-vddpr\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.853322 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28tnr\" (UniqueName: \"kubernetes.io/projected/1909a799-3429-4fe2-adca-d756ae0c7c59-kube-api-access-28tnr\") pod \"nova-cell0-cell-mapping-vddpr\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.855523 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22ad88d8-cb8a-4137-b2d6-f8e787a1526b" path="/var/lib/kubelet/pods/22ad88d8-cb8a-4137-b2d6-f8e787a1526b/volumes" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.867306 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.868575 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.875966 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.908363 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.932879 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.941119 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.943568 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.965645 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:01 crc kubenswrapper[4482]: I1125 07:05:01.983080 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.002900 4482 generic.go:334] "Generic (PLEG): container finished" podID="e5094767-b47d-4a62-9675-df093cdb0356" containerID="e38bb34d2bf3a6ff99020a94df1689ee204ee3abcea289f5ea6dd65ed2f5cb79" exitCode=0 Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.002946 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5094767-b47d-4a62-9675-df093cdb0356","Type":"ContainerDied","Data":"e38bb34d2bf3a6ff99020a94df1689ee204ee3abcea289f5ea6dd65ed2f5cb79"} Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.010472 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d227e6f6-3610-4db4-a5d1-b60bb5285194-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d227e6f6-3610-4db4-a5d1-b60bb5285194\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.010561 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltc4r\" (UniqueName: \"kubernetes.io/projected/d227e6f6-3610-4db4-a5d1-b60bb5285194-kube-api-access-ltc4r\") pod \"nova-cell1-novncproxy-0\" (UID: \"d227e6f6-3610-4db4-a5d1-b60bb5285194\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.010605 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d227e6f6-3610-4db4-a5d1-b60bb5285194-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d227e6f6-3610-4db4-a5d1-b60bb5285194\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.044969 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.049838 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.060739 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.115122 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpspz\" (UniqueName: \"kubernetes.io/projected/39bf8ee9-d19f-43ab-8262-79538e4d1422-kube-api-access-wpspz\") pod \"nova-api-0\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " pod="openstack/nova-api-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.118604 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bf8ee9-d19f-43ab-8262-79538e4d1422-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " pod="openstack/nova-api-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.118749 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d227e6f6-3610-4db4-a5d1-b60bb5285194-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d227e6f6-3610-4db4-a5d1-b60bb5285194\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.118936 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltc4r\" (UniqueName: \"kubernetes.io/projected/d227e6f6-3610-4db4-a5d1-b60bb5285194-kube-api-access-ltc4r\") pod \"nova-cell1-novncproxy-0\" (UID: \"d227e6f6-3610-4db4-a5d1-b60bb5285194\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.119097 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d227e6f6-3610-4db4-a5d1-b60bb5285194-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d227e6f6-3610-4db4-a5d1-b60bb5285194\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.119228 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bf8ee9-d19f-43ab-8262-79538e4d1422-logs\") pod \"nova-api-0\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " pod="openstack/nova-api-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.119342 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bf8ee9-d19f-43ab-8262-79538e4d1422-config-data\") pod \"nova-api-0\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " pod="openstack/nova-api-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.127048 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d227e6f6-3610-4db4-a5d1-b60bb5285194-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d227e6f6-3610-4db4-a5d1-b60bb5285194\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.163260 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d227e6f6-3610-4db4-a5d1-b60bb5285194-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d227e6f6-3610-4db4-a5d1-b60bb5285194\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.189670 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltc4r\" (UniqueName: \"kubernetes.io/projected/d227e6f6-3610-4db4-a5d1-b60bb5285194-kube-api-access-ltc4r\") pod \"nova-cell1-novncproxy-0\" (UID: \"d227e6f6-3610-4db4-a5d1-b60bb5285194\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.210919 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.217534 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.221270 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bf8ee9-d19f-43ab-8262-79538e4d1422-logs\") pod \"nova-api-0\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " pod="openstack/nova-api-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.241352 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bf8ee9-d19f-43ab-8262-79538e4d1422-config-data\") pod \"nova-api-0\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " pod="openstack/nova-api-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.241495 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/791a624e-c2f0-46e9-aec4-9d93db804972-logs\") pod \"nova-metadata-0\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.241581 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpspz\" (UniqueName: \"kubernetes.io/projected/39bf8ee9-d19f-43ab-8262-79538e4d1422-kube-api-access-wpspz\") pod \"nova-api-0\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " pod="openstack/nova-api-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.241697 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqt7j\" (UniqueName: \"kubernetes.io/projected/791a624e-c2f0-46e9-aec4-9d93db804972-kube-api-access-mqt7j\") pod \"nova-metadata-0\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.241898 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bf8ee9-d19f-43ab-8262-79538e4d1422-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " pod="openstack/nova-api-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.241956 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791a624e-c2f0-46e9-aec4-9d93db804972-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.242054 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/791a624e-c2f0-46e9-aec4-9d93db804972-config-data\") pod \"nova-metadata-0\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.228304 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bf8ee9-d19f-43ab-8262-79538e4d1422-logs\") pod \"nova-api-0\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " pod="openstack/nova-api-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.251746 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bf8ee9-d19f-43ab-8262-79538e4d1422-config-data\") pod \"nova-api-0\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " pod="openstack/nova-api-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.256231 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.266047 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.269102 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bf8ee9-d19f-43ab-8262-79538e4d1422-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " pod="openstack/nova-api-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.277837 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.291317 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpspz\" (UniqueName: \"kubernetes.io/projected/39bf8ee9-d19f-43ab-8262-79538e4d1422-kube-api-access-wpspz\") pod \"nova-api-0\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " pod="openstack/nova-api-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.295438 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c75cdbd45-cj9pn"] Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.297330 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.346383 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/791a624e-c2f0-46e9-aec4-9d93db804972-logs\") pod \"nova-metadata-0\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.346468 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqt7j\" (UniqueName: \"kubernetes.io/projected/791a624e-c2f0-46e9-aec4-9d93db804972-kube-api-access-mqt7j\") pod \"nova-metadata-0\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.346564 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791a624e-c2f0-46e9-aec4-9d93db804972-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.346607 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791a624e-c2f0-46e9-aec4-9d93db804972-config-data\") pod \"nova-metadata-0\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.346733 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/791a624e-c2f0-46e9-aec4-9d93db804972-logs\") pod \"nova-metadata-0\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.360126 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791a624e-c2f0-46e9-aec4-9d93db804972-config-data\") pod \"nova-metadata-0\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.360203 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.367662 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791a624e-c2f0-46e9-aec4-9d93db804972-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.380837 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c75cdbd45-cj9pn"] Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.435305 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqt7j\" (UniqueName: \"kubernetes.io/projected/791a624e-c2f0-46e9-aec4-9d93db804972-kube-api-access-mqt7j\") pod \"nova-metadata-0\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.455142 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.455309 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvjst\" (UniqueName: \"kubernetes.io/projected/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-kube-api-access-qvjst\") pod \"nova-scheduler-0\" (UID: \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.455393 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rwmg\" (UniqueName: \"kubernetes.io/projected/19cf9dd3-f468-4483-8b4e-59a40245b45e-kube-api-access-8rwmg\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.455436 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-dns-svc\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.455539 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-ovsdbserver-nb\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.455587 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-ovsdbserver-sb\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.455648 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-config-data\") pod \"nova-scheduler-0\" (UID: \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.455676 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-config\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.455736 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-dns-swift-storage-0\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.557436 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.557515 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-config-data\") pod \"nova-scheduler-0\" (UID: \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.557553 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-config\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.557591 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-dns-swift-storage-0\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.557689 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.557794 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvjst\" (UniqueName: \"kubernetes.io/projected/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-kube-api-access-qvjst\") pod \"nova-scheduler-0\" (UID: \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.557859 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rwmg\" (UniqueName: \"kubernetes.io/projected/19cf9dd3-f468-4483-8b4e-59a40245b45e-kube-api-access-8rwmg\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.557878 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-dns-svc\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.557966 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-ovsdbserver-nb\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.558824 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-ovsdbserver-nb\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" 
Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.558985 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-dns-swift-storage-0\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.559576 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-config\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.560065 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-ovsdbserver-sb\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.560952 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-dns-svc\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.568023 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.571658 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.587701 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-config-data\") pod \"nova-scheduler-0\" (UID: \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.588396 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvjst\" (UniqueName: \"kubernetes.io/projected/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-kube-api-access-qvjst\") pod \"nova-scheduler-0\" (UID: \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.588896 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rwmg\" (UniqueName: \"kubernetes.io/projected/19cf9dd3-f468-4483-8b4e-59a40245b45e-kube-api-access-8rwmg\") pod \"dnsmasq-dns-6c75cdbd45-cj9pn\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.634897 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.646471 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.677353 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.677849 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vddpr"] Nov 25 07:05:02 crc kubenswrapper[4482]: I1125 07:05:02.915412 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.023958 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vddpr" event={"ID":"1909a799-3429-4fe2-adca-d756ae0c7c59","Type":"ContainerStarted","Data":"36e30b1036b27cdf886ce3050abc28aad81ea5e52842065513fe93f49c2a0094"} Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.025505 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d227e6f6-3610-4db4-a5d1-b60bb5285194","Type":"ContainerStarted","Data":"a1b995a7250703bbe5ad9caa8eb1feb37e30e8d081c37cbc6a412d8ab551e68a"} Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.181228 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.421142 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c75cdbd45-cj9pn"] Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.502355 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.589158 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.798290 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwzh2"] Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.799980 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.804393 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.804587 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.822907 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.825831 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwzh2"] Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.918987 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtpf9\" (UniqueName: \"kubernetes.io/projected/e5094767-b47d-4a62-9675-df093cdb0356-kube-api-access-gtpf9\") pod \"e5094767-b47d-4a62-9675-df093cdb0356\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.920095 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-combined-ca-bundle\") pod \"e5094767-b47d-4a62-9675-df093cdb0356\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.920372 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-scripts\") pod \"e5094767-b47d-4a62-9675-df093cdb0356\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.920746 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5094767-b47d-4a62-9675-df093cdb0356-etc-machine-id\") pod \"e5094767-b47d-4a62-9675-df093cdb0356\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.920871 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-config-data-custom\") pod \"e5094767-b47d-4a62-9675-df093cdb0356\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.920942 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-config-data\") pod \"e5094767-b47d-4a62-9675-df093cdb0356\" (UID: \"e5094767-b47d-4a62-9675-df093cdb0356\") " Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.921024 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5094767-b47d-4a62-9675-df093cdb0356-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e5094767-b47d-4a62-9675-df093cdb0356" (UID: "e5094767-b47d-4a62-9675-df093cdb0356"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.921543 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5vpg\" (UniqueName: \"kubernetes.io/projected/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-kube-api-access-v5vpg\") pod \"nova-cell1-conductor-db-sync-zwzh2\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.921637 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-scripts\") pod \"nova-cell1-conductor-db-sync-zwzh2\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.921679 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zwzh2\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.921775 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-config-data\") pod \"nova-cell1-conductor-db-sync-zwzh2\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.921878 4482 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5094767-b47d-4a62-9675-df093cdb0356-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.932813 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5094767-b47d-4a62-9675-df093cdb0356-kube-api-access-gtpf9" (OuterVolumeSpecName: "kube-api-access-gtpf9") pod "e5094767-b47d-4a62-9675-df093cdb0356" (UID: "e5094767-b47d-4a62-9675-df093cdb0356"). InnerVolumeSpecName "kube-api-access-gtpf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.958743 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5094767-b47d-4a62-9675-df093cdb0356" (UID: "e5094767-b47d-4a62-9675-df093cdb0356"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:03 crc kubenswrapper[4482]: I1125 07:05:03.962285 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-scripts" (OuterVolumeSpecName: "scripts") pod "e5094767-b47d-4a62-9675-df093cdb0356" (UID: "e5094767-b47d-4a62-9675-df093cdb0356"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.013119 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.018753 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5094767-b47d-4a62-9675-df093cdb0356" (UID: "e5094767-b47d-4a62-9675-df093cdb0356"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.026581 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5vpg\" (UniqueName: \"kubernetes.io/projected/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-kube-api-access-v5vpg\") pod \"nova-cell1-conductor-db-sync-zwzh2\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.026667 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-scripts\") pod \"nova-cell1-conductor-db-sync-zwzh2\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.026694 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zwzh2\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.026811 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-config-data\") pod \"nova-cell1-conductor-db-sync-zwzh2\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.026925 4482 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.026937 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtpf9\" (UniqueName: \"kubernetes.io/projected/e5094767-b47d-4a62-9675-df093cdb0356-kube-api-access-gtpf9\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.026948 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.026960 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.053921 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zwzh2\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.054711 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-config-data\") pod \"nova-cell1-conductor-db-sync-zwzh2\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.071647 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5vpg\" (UniqueName: \"kubernetes.io/projected/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-kube-api-access-v5vpg\") pod \"nova-cell1-conductor-db-sync-zwzh2\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.098608 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-scripts\") pod \"nova-cell1-conductor-db-sync-zwzh2\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.140196 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.153339 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"791a624e-c2f0-46e9-aec4-9d93db804972","Type":"ContainerStarted","Data":"ab0e516cec2ba019082f7d8bd18b38ac097b855d08099ab37160b4c3b8378f4a"} Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.172520 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vddpr" event={"ID":"1909a799-3429-4fe2-adca-d756ae0c7c59","Type":"ContainerStarted","Data":"9dfc79e9ca51e0b4abf83b05a54ac2273275d7193b81548cec98fdbf415d0864"} Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.188415 4482 generic.go:334] "Generic (PLEG): container finished" podID="e5094767-b47d-4a62-9675-df093cdb0356" containerID="b5a5f4145ed24fedd619b4c0b5f084d48d244bb59b7a75e74e734973020b115e" exitCode=0 Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.188555 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.188871 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5094767-b47d-4a62-9675-df093cdb0356","Type":"ContainerDied","Data":"b5a5f4145ed24fedd619b4c0b5f084d48d244bb59b7a75e74e734973020b115e"} Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.188913 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5094767-b47d-4a62-9675-df093cdb0356","Type":"ContainerDied","Data":"c818c497b128e64eb35632daf7bea4ec3970d713dce9bc11c82219d8763798d8"} Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.188932 4482 scope.go:117] "RemoveContainer" containerID="e38bb34d2bf3a6ff99020a94df1689ee204ee3abcea289f5ea6dd65ed2f5cb79" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.211444 4482 generic.go:334] "Generic (PLEG): container finished" podID="19cf9dd3-f468-4483-8b4e-59a40245b45e" containerID="bdf45347ddc47e44764c69fdd6a4d53af10c6bda3c63f7eb0460369fc2b81490" exitCode=0 Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.211500 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" event={"ID":"19cf9dd3-f468-4483-8b4e-59a40245b45e","Type":"ContainerDied","Data":"bdf45347ddc47e44764c69fdd6a4d53af10c6bda3c63f7eb0460369fc2b81490"} Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.211520 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" event={"ID":"19cf9dd3-f468-4483-8b4e-59a40245b45e","Type":"ContainerStarted","Data":"69f026a05069bafa56fc4a8424c0743917372974f61d06c5915cdb67185e2011"} Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.212995 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-vddpr" podStartSLOduration=3.212978973 podStartE2EDuration="3.212978973s" podCreationTimestamp="2025-11-25 07:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:04.194001879 +0000 UTC m=+1078.682233128" watchObservedRunningTime="2025-11-25 07:05:04.212978973 +0000 UTC m=+1078.701210223" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.214261 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-config-data" (OuterVolumeSpecName: "config-data") pod "e5094767-b47d-4a62-9675-df093cdb0356" (UID: "e5094767-b47d-4a62-9675-df093cdb0356"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.222221 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0295ea9f-b4e8-435d-9c64-e0c02c3defa9","Type":"ContainerStarted","Data":"14a1a630676e63ef7d3ff062c156a1d363a5d64cd9741249df26095d69e2d3e9"} Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.255949 4482 scope.go:117] "RemoveContainer" containerID="b5a5f4145ed24fedd619b4c0b5f084d48d244bb59b7a75e74e734973020b115e" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.258327 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"39bf8ee9-d19f-43ab-8262-79538e4d1422","Type":"ContainerStarted","Data":"3dda7a887d28883769c372e184a742da77d2821ca1d9081930ebf900fc80f897"} Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.261219 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5094767-b47d-4a62-9675-df093cdb0356-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.384643 4482 scope.go:117] "RemoveContainer" containerID="e38bb34d2bf3a6ff99020a94df1689ee204ee3abcea289f5ea6dd65ed2f5cb79" Nov 25 07:05:04 crc kubenswrapper[4482]: E1125 07:05:04.395504 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e38bb34d2bf3a6ff99020a94df1689ee204ee3abcea289f5ea6dd65ed2f5cb79\": container with ID starting with e38bb34d2bf3a6ff99020a94df1689ee204ee3abcea289f5ea6dd65ed2f5cb79 not found: ID does not exist" containerID="e38bb34d2bf3a6ff99020a94df1689ee204ee3abcea289f5ea6dd65ed2f5cb79" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.395565 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e38bb34d2bf3a6ff99020a94df1689ee204ee3abcea289f5ea6dd65ed2f5cb79"} err="failed to get container status \"e38bb34d2bf3a6ff99020a94df1689ee204ee3abcea289f5ea6dd65ed2f5cb79\": rpc error: code = NotFound desc = could not find container \"e38bb34d2bf3a6ff99020a94df1689ee204ee3abcea289f5ea6dd65ed2f5cb79\": container with ID starting with e38bb34d2bf3a6ff99020a94df1689ee204ee3abcea289f5ea6dd65ed2f5cb79 not found: ID does not exist" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.395591 4482 scope.go:117] "RemoveContainer" containerID="b5a5f4145ed24fedd619b4c0b5f084d48d244bb59b7a75e74e734973020b115e" Nov 25 07:05:04 crc kubenswrapper[4482]: E1125 07:05:04.416026 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5a5f4145ed24fedd619b4c0b5f084d48d244bb59b7a75e74e734973020b115e\": container with ID starting with b5a5f4145ed24fedd619b4c0b5f084d48d244bb59b7a75e74e734973020b115e not found: ID does not exist" containerID="b5a5f4145ed24fedd619b4c0b5f084d48d244bb59b7a75e74e734973020b115e" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.416096 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a5f4145ed24fedd619b4c0b5f084d48d244bb59b7a75e74e734973020b115e"} err="failed to get container status \"b5a5f4145ed24fedd619b4c0b5f084d48d244bb59b7a75e74e734973020b115e\": rpc error: code = NotFound desc = could not find container \"b5a5f4145ed24fedd619b4c0b5f084d48d244bb59b7a75e74e734973020b115e\": container with ID starting with b5a5f4145ed24fedd619b4c0b5f084d48d244bb59b7a75e74e734973020b115e not found: ID does 
not exist" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.606413 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.637474 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.657473 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:05:04 crc kubenswrapper[4482]: E1125 07:05:04.658399 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5094767-b47d-4a62-9675-df093cdb0356" containerName="cinder-scheduler" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.658417 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5094767-b47d-4a62-9675-df093cdb0356" containerName="cinder-scheduler" Nov 25 07:05:04 crc kubenswrapper[4482]: E1125 07:05:04.658440 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5094767-b47d-4a62-9675-df093cdb0356" containerName="probe" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.658446 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5094767-b47d-4a62-9675-df093cdb0356" containerName="probe" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.658622 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5094767-b47d-4a62-9675-df093cdb0356" containerName="probe" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.658638 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5094767-b47d-4a62-9675-df093cdb0356" containerName="cinder-scheduler" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.659686 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.665591 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.715227 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:05:04 crc kubenswrapper[4482]: E1125 07:05:04.728427 4482 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="08d1da05c3910796afa7506712e18f571090b9d1e1d10ddfdc0f55109287b8c3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 25 07:05:04 crc kubenswrapper[4482]: E1125 07:05:04.729580 4482 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="08d1da05c3910796afa7506712e18f571090b9d1e1d10ddfdc0f55109287b8c3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 25 07:05:04 crc kubenswrapper[4482]: E1125 07:05:04.739477 4482 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="08d1da05c3910796afa7506712e18f571090b9d1e1d10ddfdc0f55109287b8c3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 25 07:05:04 crc kubenswrapper[4482]: E1125 07:05:04.739523 4482 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openstack/heat-engine-6fccbbd848-gp8qx" podUID="5bda1dfd-9f8b-4fbd-8093-689b7afada79" containerName="heat-engine" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.815507 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56352bce-6a1b-4fc3-9493-26a08448b3e9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.815565 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56352bce-6a1b-4fc3-9493-26a08448b3e9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.815725 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56352bce-6a1b-4fc3-9493-26a08448b3e9-scripts\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.815763 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wctn8\" (UniqueName: \"kubernetes.io/projected/56352bce-6a1b-4fc3-9493-26a08448b3e9-kube-api-access-wctn8\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.816131 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56352bce-6a1b-4fc3-9493-26a08448b3e9-config-data\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.816190 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56352bce-6a1b-4fc3-9493-26a08448b3e9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.918867 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56352bce-6a1b-4fc3-9493-26a08448b3e9-scripts\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.918910 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wctn8\" (UniqueName: \"kubernetes.io/projected/56352bce-6a1b-4fc3-9493-26a08448b3e9-kube-api-access-wctn8\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.918967 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56352bce-6a1b-4fc3-9493-26a08448b3e9-config-data\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " 
pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.918995 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56352bce-6a1b-4fc3-9493-26a08448b3e9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.919095 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56352bce-6a1b-4fc3-9493-26a08448b3e9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.919114 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56352bce-6a1b-4fc3-9493-26a08448b3e9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.919606 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56352bce-6a1b-4fc3-9493-26a08448b3e9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.926872 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56352bce-6a1b-4fc3-9493-26a08448b3e9-scripts\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.938505 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56352bce-6a1b-4fc3-9493-26a08448b3e9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.939153 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56352bce-6a1b-4fc3-9493-26a08448b3e9-config-data\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.954034 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wctn8\" (UniqueName: \"kubernetes.io/projected/56352bce-6a1b-4fc3-9493-26a08448b3e9-kube-api-access-wctn8\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.962897 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56352bce-6a1b-4fc3-9493-26a08448b3e9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"56352bce-6a1b-4fc3-9493-26a08448b3e9\") " pod="openstack/cinder-scheduler-0" Nov 25 07:05:04 crc kubenswrapper[4482]: I1125 07:05:04.987777 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwzh2"] Nov 25 07:05:05 crc kubenswrapper[4482]: I1125 07:05:05.013457 4482 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 07:05:05 crc kubenswrapper[4482]: I1125 07:05:05.286633 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zwzh2" event={"ID":"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d","Type":"ContainerStarted","Data":"1d227f12f715ada56621acaf235639a1352f5447a7ee5405cc91d61b86de71be"} Nov 25 07:05:05 crc kubenswrapper[4482]: I1125 07:05:05.319034 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" event={"ID":"19cf9dd3-f468-4483-8b4e-59a40245b45e","Type":"ContainerStarted","Data":"5dccc611decd232cbbe6c6170f01eaa38b90ae02a10213c0a504c68d2a1ee294"} Nov 25 07:05:05 crc kubenswrapper[4482]: I1125 07:05:05.366846 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" podStartSLOduration=3.36682852 podStartE2EDuration="3.36682852s" podCreationTimestamp="2025-11-25 07:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:05.349704679 +0000 UTC m=+1079.837935938" watchObservedRunningTime="2025-11-25 07:05:05.36682852 +0000 UTC m=+1079.855059779" Nov 25 07:05:05 crc kubenswrapper[4482]: I1125 07:05:05.756230 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:05 crc kubenswrapper[4482]: I1125 07:05:05.763802 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:05:05 crc kubenswrapper[4482]: I1125 07:05:05.813700 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:05:05 crc kubenswrapper[4482]: I1125 07:05:05.969648 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5094767-b47d-4a62-9675-df093cdb0356" path="/var/lib/kubelet/pods/e5094767-b47d-4a62-9675-df093cdb0356/volumes" Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.379938 4482 generic.go:334] "Generic (PLEG): container finished" podID="5bda1dfd-9f8b-4fbd-8093-689b7afada79" containerID="08d1da05c3910796afa7506712e18f571090b9d1e1d10ddfdc0f55109287b8c3" exitCode=0 Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.380049 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6fccbbd848-gp8qx" event={"ID":"5bda1dfd-9f8b-4fbd-8093-689b7afada79","Type":"ContainerDied","Data":"08d1da05c3910796afa7506712e18f571090b9d1e1d10ddfdc0f55109287b8c3"} Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.385558 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zwzh2" event={"ID":"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d","Type":"ContainerStarted","Data":"dc337e694aff42f5f1e50941d1fc9763e0bb538c31efd27659ef20f62153f7e9"} Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.421532 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"56352bce-6a1b-4fc3-9493-26a08448b3e9","Type":"ContainerStarted","Data":"51299b0996952a5d5d95324b9d5249d0b3dea21dc8a38dfa565d70b2de3f872c"} Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.421589 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.437876 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-conductor-db-sync-zwzh2" podStartSLOduration=3.437862296 podStartE2EDuration="3.437862296s" podCreationTimestamp="2025-11-25 07:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:06.429774609 +0000 UTC m=+1080.918005868" watchObservedRunningTime="2025-11-25 07:05:06.437862296 +0000 UTC m=+1080.926093546" Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.683002 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.803384 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-config-data\") pod \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.803500 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-config-data-custom\") pod \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.803698 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-combined-ca-bundle\") pod \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.803739 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5b9v\" (UniqueName: \"kubernetes.io/projected/5bda1dfd-9f8b-4fbd-8093-689b7afada79-kube-api-access-n5b9v\") pod \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\" (UID: \"5bda1dfd-9f8b-4fbd-8093-689b7afada79\") " Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.836364 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bda1dfd-9f8b-4fbd-8093-689b7afada79-kube-api-access-n5b9v" (OuterVolumeSpecName: "kube-api-access-n5b9v") pod "5bda1dfd-9f8b-4fbd-8093-689b7afada79" (UID: "5bda1dfd-9f8b-4fbd-8093-689b7afada79"). InnerVolumeSpecName "kube-api-access-n5b9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.836391 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5bda1dfd-9f8b-4fbd-8093-689b7afada79" (UID: "5bda1dfd-9f8b-4fbd-8093-689b7afada79"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.842801 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bda1dfd-9f8b-4fbd-8093-689b7afada79" (UID: "5bda1dfd-9f8b-4fbd-8093-689b7afada79"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.907361 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.907388 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5b9v\" (UniqueName: \"kubernetes.io/projected/5bda1dfd-9f8b-4fbd-8093-689b7afada79-kube-api-access-n5b9v\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.907400 4482 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:06 crc kubenswrapper[4482]: I1125 07:05:06.944293 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-config-data" (OuterVolumeSpecName: "config-data") pod "5bda1dfd-9f8b-4fbd-8093-689b7afada79" (UID: "5bda1dfd-9f8b-4fbd-8093-689b7afada79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:07 crc kubenswrapper[4482]: I1125 07:05:07.009956 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bda1dfd-9f8b-4fbd-8093-689b7afada79-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:07 crc kubenswrapper[4482]: I1125 07:05:07.434063 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6fccbbd848-gp8qx" event={"ID":"5bda1dfd-9f8b-4fbd-8093-689b7afada79","Type":"ContainerDied","Data":"362675109e4ea32204b4fc54868afd10dc002831e54cfd158d67dab1ddd35e08"} Nov 25 07:05:07 crc kubenswrapper[4482]: I1125 07:05:07.434264 4482 scope.go:117] "RemoveContainer" containerID="08d1da05c3910796afa7506712e18f571090b9d1e1d10ddfdc0f55109287b8c3" Nov 25 07:05:07 crc kubenswrapper[4482]: I1125 07:05:07.434269 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6fccbbd848-gp8qx" Nov 25 07:05:07 crc kubenswrapper[4482]: I1125 07:05:07.441808 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"56352bce-6a1b-4fc3-9493-26a08448b3e9","Type":"ContainerStarted","Data":"dd3aa647b08fb10d195c15c760e2a460f7e2caeeb9a30b56686ad240570ac24f"} Nov 25 07:05:07 crc kubenswrapper[4482]: I1125 07:05:07.466341 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6fccbbd848-gp8qx"] Nov 25 07:05:07 crc kubenswrapper[4482]: I1125 07:05:07.478867 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6fccbbd848-gp8qx"] Nov 25 07:05:07 crc kubenswrapper[4482]: I1125 07:05:07.576545 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:07 crc kubenswrapper[4482]: I1125 07:05:07.605442 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:05:07 crc kubenswrapper[4482]: I1125 07:05:07.611976 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 07:05:07 crc kubenswrapper[4482]: I1125 07:05:07.612161 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="dde6054d-7b3c-41ca-a16d-34693953644f" containerName="nova-cell0-conductor-conductor" containerID="cri-o://48e66b3aca93b1568b0625d9cc2c1d27010861aa2e80678c4840e0a33c488427" gracePeriod=30 Nov 25 07:05:07 crc kubenswrapper[4482]: I1125 07:05:07.843526 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bda1dfd-9f8b-4fbd-8093-689b7afada79" path="/var/lib/kubelet/pods/5bda1dfd-9f8b-4fbd-8093-689b7afada79/volumes" Nov 25 07:05:09 crc kubenswrapper[4482]: I1125 07:05:09.892553 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:09 crc kubenswrapper[4482]: I1125 07:05:09.893026 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="ceilometer-central-agent" containerID="cri-o://cc47653245d4c8b1f9dab090cfd50b473a9a2fbfab4c880d9f8c960e5b7e5530" gracePeriod=30 Nov 25 07:05:09 crc kubenswrapper[4482]: I1125 07:05:09.893465 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="proxy-httpd" containerID="cri-o://138e7b3fc78c7397997119aaff6facabe368ec544e7104fe981d97473c78da72" gracePeriod=30 Nov 25 07:05:09 crc kubenswrapper[4482]: I1125 07:05:09.893508 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="sg-core" containerID="cri-o://8ba44be81aca99bb30c5ed8b31eb8609112c090f0d1d0fe91c2b6c395d0ee672" gracePeriod=30 Nov 25 07:05:09 crc kubenswrapper[4482]: I1125 07:05:09.893549 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="ceilometer-notification-agent" containerID="cri-o://862b576c7d68825f91daaa8384fd3fd1f4032f205a1608bcd6f78f293b8d4c23" gracePeriod=30 Nov 25 07:05:09 crc kubenswrapper[4482]: I1125 07:05:09.914612 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.531055 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dde6054d-7b3c-41ca-a16d-34693953644f-combined-ca-bundle\") pod \"dde6054d-7b3c-41ca-a16d-34693953644f\" (UID: \"dde6054d-7b3c-41ca-a16d-34693953644f\") "
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.531152 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45bmk\" (UniqueName: \"kubernetes.io/projected/dde6054d-7b3c-41ca-a16d-34693953644f-kube-api-access-45bmk\") pod \"dde6054d-7b3c-41ca-a16d-34693953644f\" (UID: \"dde6054d-7b3c-41ca-a16d-34693953644f\") "
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.535999 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dde6054d-7b3c-41ca-a16d-34693953644f-config-data\") pod \"dde6054d-7b3c-41ca-a16d-34693953644f\" (UID: \"dde6054d-7b3c-41ca-a16d-34693953644f\") "
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.539368 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"791a624e-c2f0-46e9-aec4-9d93db804972","Type":"ContainerStarted","Data":"cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519"}
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.547943 4482 generic.go:334] "Generic (PLEG): container finished" podID="923dd3f7-190f-4715-a057-3eb83c260918" containerID="138e7b3fc78c7397997119aaff6facabe368ec544e7104fe981d97473c78da72" exitCode=0
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.547970 4482 generic.go:334] "Generic (PLEG): container finished" podID="923dd3f7-190f-4715-a057-3eb83c260918" containerID="8ba44be81aca99bb30c5ed8b31eb8609112c090f0d1d0fe91c2b6c395d0ee672" exitCode=2
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.548007 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923dd3f7-190f-4715-a057-3eb83c260918","Type":"ContainerDied","Data":"138e7b3fc78c7397997119aaff6facabe368ec544e7104fe981d97473c78da72"}
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.548208 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923dd3f7-190f-4715-a057-3eb83c260918","Type":"ContainerDied","Data":"8ba44be81aca99bb30c5ed8b31eb8609112c090f0d1d0fe91c2b6c395d0ee672"}
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.554255 4482 generic.go:334] "Generic (PLEG): container finished" podID="dde6054d-7b3c-41ca-a16d-34693953644f" containerID="48e66b3aca93b1568b0625d9cc2c1d27010861aa2e80678c4840e0a33c488427" exitCode=0
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.554300 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dde6054d-7b3c-41ca-a16d-34693953644f","Type":"ContainerDied","Data":"48e66b3aca93b1568b0625d9cc2c1d27010861aa2e80678c4840e0a33c488427"}
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.554317 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dde6054d-7b3c-41ca-a16d-34693953644f","Type":"ContainerDied","Data":"6bd859e8098c5d69f36d76fdd793a813fe0caf522df1bdff50d6bb8c58f6631b"}
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.554334 4482 scope.go:117] "RemoveContainer" containerID="48e66b3aca93b1568b0625d9cc2c1d27010861aa2e80678c4840e0a33c488427"
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.554428 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.562472 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dde6054d-7b3c-41ca-a16d-34693953644f-kube-api-access-45bmk" (OuterVolumeSpecName: "kube-api-access-45bmk") pod "dde6054d-7b3c-41ca-a16d-34693953644f" (UID: "dde6054d-7b3c-41ca-a16d-34693953644f"). InnerVolumeSpecName "kube-api-access-45bmk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.599828 4482 scope.go:117] "RemoveContainer" containerID="48e66b3aca93b1568b0625d9cc2c1d27010861aa2e80678c4840e0a33c488427"
Nov 25 07:05:10 crc kubenswrapper[4482]: E1125 07:05:10.602260 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48e66b3aca93b1568b0625d9cc2c1d27010861aa2e80678c4840e0a33c488427\": container with ID starting with 48e66b3aca93b1568b0625d9cc2c1d27010861aa2e80678c4840e0a33c488427 not found: ID does not exist" containerID="48e66b3aca93b1568b0625d9cc2c1d27010861aa2e80678c4840e0a33c488427"
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.602328 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48e66b3aca93b1568b0625d9cc2c1d27010861aa2e80678c4840e0a33c488427"} err="failed to get container status \"48e66b3aca93b1568b0625d9cc2c1d27010861aa2e80678c4840e0a33c488427\": rpc error: code = NotFound desc = could not find container \"48e66b3aca93b1568b0625d9cc2c1d27010861aa2e80678c4840e0a33c488427\": container with ID starting with 48e66b3aca93b1568b0625d9cc2c1d27010861aa2e80678c4840e0a33c488427 not found: ID does not exist"
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.639320 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45bmk\" (UniqueName: \"kubernetes.io/projected/dde6054d-7b3c-41ca-a16d-34693953644f-kube-api-access-45bmk\") on node \"crc\" DevicePath \"\""
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.757037 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde6054d-7b3c-41ca-a16d-34693953644f-config-data" (OuterVolumeSpecName: "config-data") pod "dde6054d-7b3c-41ca-a16d-34693953644f" (UID: "dde6054d-7b3c-41ca-a16d-34693953644f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.778219 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde6054d-7b3c-41ca-a16d-34693953644f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dde6054d-7b3c-41ca-a16d-34693953644f" (UID: "dde6054d-7b3c-41ca-a16d-34693953644f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.843969 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dde6054d-7b3c-41ca-a16d-34693953644f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:10 crc kubenswrapper[4482]: I1125 07:05:10.843998 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dde6054d-7b3c-41ca-a16d-34693953644f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.025266 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.049660 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.059012 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 07:05:11 crc kubenswrapper[4482]: E1125 07:05:11.065264 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bda1dfd-9f8b-4fbd-8093-689b7afada79" containerName="heat-engine" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.065287 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bda1dfd-9f8b-4fbd-8093-689b7afada79" containerName="heat-engine" Nov 25 07:05:11 crc kubenswrapper[4482]: E1125 07:05:11.065325 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde6054d-7b3c-41ca-a16d-34693953644f" containerName="nova-cell0-conductor-conductor" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.065332 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde6054d-7b3c-41ca-a16d-34693953644f" containerName="nova-cell0-conductor-conductor" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.065597 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="dde6054d-7b3c-41ca-a16d-34693953644f" containerName="nova-cell0-conductor-conductor" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.065619 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bda1dfd-9f8b-4fbd-8093-689b7afada79" containerName="heat-engine" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.066537 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.066685 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.072028 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.151716 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2st7\" (UniqueName: \"kubernetes.io/projected/b1a45f03-7f92-428d-8cac-f0b98c637133-kube-api-access-q2st7\") pod \"nova-cell0-conductor-0\" (UID: \"b1a45f03-7f92-428d-8cac-f0b98c637133\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.151865 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a45f03-7f92-428d-8cac-f0b98c637133-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b1a45f03-7f92-428d-8cac-f0b98c637133\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.152030 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a45f03-7f92-428d-8cac-f0b98c637133-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b1a45f03-7f92-428d-8cac-f0b98c637133\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.255401 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2st7\" (UniqueName: \"kubernetes.io/projected/b1a45f03-7f92-428d-8cac-f0b98c637133-kube-api-access-q2st7\") pod \"nova-cell0-conductor-0\" (UID: \"b1a45f03-7f92-428d-8cac-f0b98c637133\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.255595 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a45f03-7f92-428d-8cac-f0b98c637133-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b1a45f03-7f92-428d-8cac-f0b98c637133\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.255780 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a45f03-7f92-428d-8cac-f0b98c637133-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b1a45f03-7f92-428d-8cac-f0b98c637133\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.261949 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a45f03-7f92-428d-8cac-f0b98c637133-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b1a45f03-7f92-428d-8cac-f0b98c637133\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.276254 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2st7\" (UniqueName: \"kubernetes.io/projected/b1a45f03-7f92-428d-8cac-f0b98c637133-kube-api-access-q2st7\") pod \"nova-cell0-conductor-0\" (UID: \"b1a45f03-7f92-428d-8cac-f0b98c637133\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.278923 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b1a45f03-7f92-428d-8cac-f0b98c637133-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b1a45f03-7f92-428d-8cac-f0b98c637133\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.406120 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.655404 4482 generic.go:334] "Generic (PLEG): container finished" podID="923dd3f7-190f-4715-a057-3eb83c260918" containerID="cc47653245d4c8b1f9dab090cfd50b473a9a2fbfab4c880d9f8c960e5b7e5530" exitCode=0 Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.655614 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923dd3f7-190f-4715-a057-3eb83c260918","Type":"ContainerDied","Data":"cc47653245d4c8b1f9dab090cfd50b473a9a2fbfab4c880d9f8c960e5b7e5530"} Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.700793 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d227e6f6-3610-4db4-a5d1-b60bb5285194","Type":"ContainerStarted","Data":"98b36de37d32104b8615e400e1fc197432e56a59afd50002c6951eeabcfa5ab4"} Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.701065 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d227e6f6-3610-4db4-a5d1-b60bb5285194" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://98b36de37d32104b8615e400e1fc197432e56a59afd50002c6951eeabcfa5ab4" gracePeriod=30 Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.739466 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0295ea9f-b4e8-435d-9c64-e0c02c3defa9","Type":"ContainerStarted","Data":"2da1b60b5c057ac5c7b37fd93a1120484a789e08e0bffd9cdca6af5cb535401a"} Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.739661 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0295ea9f-b4e8-435d-9c64-e0c02c3defa9" containerName="nova-scheduler-scheduler" containerID="cri-o://2da1b60b5c057ac5c7b37fd93a1120484a789e08e0bffd9cdca6af5cb535401a" gracePeriod=30 Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.739715 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.78744501 podStartE2EDuration="10.739695992s" podCreationTimestamp="2025-11-25 07:05:01 +0000 UTC" firstStartedPulling="2025-11-25 07:05:02.978451065 +0000 UTC m=+1077.466682323" lastFinishedPulling="2025-11-25 07:05:09.930702045 +0000 UTC m=+1084.418933305" observedRunningTime="2025-11-25 07:05:11.723530178 +0000 UTC m=+1086.211761437" watchObservedRunningTime="2025-11-25 07:05:11.739695992 +0000 UTC m=+1086.227927252" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.764527 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.416547448 podStartE2EDuration="9.764512345s" podCreationTimestamp="2025-11-25 07:05:02 +0000 UTC" firstStartedPulling="2025-11-25 07:05:03.582428046 +0000 UTC m=+1078.070659305" lastFinishedPulling="2025-11-25 07:05:09.930392943 +0000 UTC m=+1084.418624202" observedRunningTime="2025-11-25 07:05:11.760130845 +0000 UTC m=+1086.248362104" watchObservedRunningTime="2025-11-25 07:05:11.764512345 +0000 UTC m=+1086.252743594" Nov 25 07:05:11 crc 
kubenswrapper[4482]: I1125 07:05:11.780586 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"39bf8ee9-d19f-43ab-8262-79538e4d1422","Type":"ContainerStarted","Data":"0198bfe5c9119d7785a1bd39e8a46a89736c4cdc77ba3a72e716795107158c80"} Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.780695 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"39bf8ee9-d19f-43ab-8262-79538e4d1422","Type":"ContainerStarted","Data":"73e383b11f59d9437e5be6fb7f2c2d1124ac2d74ea9deee4887302df50adf16f"} Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.780923 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="39bf8ee9-d19f-43ab-8262-79538e4d1422" containerName="nova-api-log" containerID="cri-o://73e383b11f59d9437e5be6fb7f2c2d1124ac2d74ea9deee4887302df50adf16f" gracePeriod=30 Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.782324 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="39bf8ee9-d19f-43ab-8262-79538e4d1422" containerName="nova-api-api" containerID="cri-o://0198bfe5c9119d7785a1bd39e8a46a89736c4cdc77ba3a72e716795107158c80" gracePeriod=30 Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.810852 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"791a624e-c2f0-46e9-aec4-9d93db804972","Type":"ContainerStarted","Data":"729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542"} Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.811001 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="791a624e-c2f0-46e9-aec4-9d93db804972" containerName="nova-metadata-log" containerID="cri-o://cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519" gracePeriod=30 Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.811263 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="791a624e-c2f0-46e9-aec4-9d93db804972" containerName="nova-metadata-metadata" containerID="cri-o://729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542" gracePeriod=30 Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.819589 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.060271151 podStartE2EDuration="10.819577101s" podCreationTimestamp="2025-11-25 07:05:01 +0000 UTC" firstStartedPulling="2025-11-25 07:05:03.193567382 +0000 UTC m=+1077.681798641" lastFinishedPulling="2025-11-25 07:05:09.952873332 +0000 UTC m=+1084.441104591" observedRunningTime="2025-11-25 07:05:11.812861732 +0000 UTC m=+1086.301092991" watchObservedRunningTime="2025-11-25 07:05:11.819577101 +0000 UTC m=+1086.307808350" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.863307 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dde6054d-7b3c-41ca-a16d-34693953644f" path="/var/lib/kubelet/pods/dde6054d-7b3c-41ca-a16d-34693953644f/volumes" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.863930 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"56352bce-6a1b-4fc3-9493-26a08448b3e9","Type":"ContainerStarted","Data":"0af8fed01880112117f7a986fb47de816867ef88fa99ae807d7e8e27b07f0c87"} Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.892433 4482 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.514378454 podStartE2EDuration="10.892411415s" podCreationTimestamp="2025-11-25 07:05:01 +0000 UTC" firstStartedPulling="2025-11-25 07:05:03.551639454 +0000 UTC m=+1078.039870713" lastFinishedPulling="2025-11-25 07:05:09.929672415 +0000 UTC m=+1084.417903674" observedRunningTime="2025-11-25 07:05:11.852252299 +0000 UTC m=+1086.340483558" watchObservedRunningTime="2025-11-25 07:05:11.892411415 +0000 UTC m=+1086.380642665" Nov 25 07:05:11 crc kubenswrapper[4482]: I1125 07:05:11.897410 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.897390611 podStartE2EDuration="7.897390611s" podCreationTimestamp="2025-11-25 07:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:11.882832196 +0000 UTC m=+1086.371063446" watchObservedRunningTime="2025-11-25 07:05:11.897390611 +0000 UTC m=+1086.385621871" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.024749 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.218567 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.628248 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.636062 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.651341 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.705296 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqt7j\" (UniqueName: \"kubernetes.io/projected/791a624e-c2f0-46e9-aec4-9d93db804972-kube-api-access-mqt7j\") pod \"791a624e-c2f0-46e9-aec4-9d93db804972\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.705449 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/791a624e-c2f0-46e9-aec4-9d93db804972-logs\") pod \"791a624e-c2f0-46e9-aec4-9d93db804972\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.705556 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791a624e-c2f0-46e9-aec4-9d93db804972-config-data\") pod \"791a624e-c2f0-46e9-aec4-9d93db804972\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.705575 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791a624e-c2f0-46e9-aec4-9d93db804972-combined-ca-bundle\") pod \"791a624e-c2f0-46e9-aec4-9d93db804972\" (UID: \"791a624e-c2f0-46e9-aec4-9d93db804972\") " Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.706799 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/791a624e-c2f0-46e9-aec4-9d93db804972-logs" (OuterVolumeSpecName: "logs") 
pod "791a624e-c2f0-46e9-aec4-9d93db804972" (UID: "791a624e-c2f0-46e9-aec4-9d93db804972"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.713807 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/791a624e-c2f0-46e9-aec4-9d93db804972-kube-api-access-mqt7j" (OuterVolumeSpecName: "kube-api-access-mqt7j") pod "791a624e-c2f0-46e9-aec4-9d93db804972" (UID: "791a624e-c2f0-46e9-aec4-9d93db804972"). InnerVolumeSpecName "kube-api-access-mqt7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.743315 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84dbcdd9df-95cth"] Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.743543 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84dbcdd9df-95cth" podUID="9189dc29-1a63-4e21-b4c6-066c86c6a7ab" containerName="dnsmasq-dns" containerID="cri-o://2b7cf784913e44d4f524680ae537f8d6f3bf8195b7ff2f2af16085ac5c04e0f2" gracePeriod=10 Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.758821 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791a624e-c2f0-46e9-aec4-9d93db804972-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "791a624e-c2f0-46e9-aec4-9d93db804972" (UID: "791a624e-c2f0-46e9-aec4-9d93db804972"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.810082 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/791a624e-c2f0-46e9-aec4-9d93db804972-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.810116 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791a624e-c2f0-46e9-aec4-9d93db804972-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.810127 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqt7j\" (UniqueName: \"kubernetes.io/projected/791a624e-c2f0-46e9-aec4-9d93db804972-kube-api-access-mqt7j\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.811706 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791a624e-c2f0-46e9-aec4-9d93db804972-config-data" (OuterVolumeSpecName: "config-data") pod "791a624e-c2f0-46e9-aec4-9d93db804972" (UID: "791a624e-c2f0-46e9-aec4-9d93db804972"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.893325 4482 generic.go:334] "Generic (PLEG): container finished" podID="791a624e-c2f0-46e9-aec4-9d93db804972" containerID="729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542" exitCode=0 Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.893355 4482 generic.go:334] "Generic (PLEG): container finished" podID="791a624e-c2f0-46e9-aec4-9d93db804972" containerID="cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519" exitCode=143 Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.893399 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"791a624e-c2f0-46e9-aec4-9d93db804972","Type":"ContainerDied","Data":"729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542"} Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.893426 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"791a624e-c2f0-46e9-aec4-9d93db804972","Type":"ContainerDied","Data":"cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519"} Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.893439 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"791a624e-c2f0-46e9-aec4-9d93db804972","Type":"ContainerDied","Data":"ab0e516cec2ba019082f7d8bd18b38ac097b855d08099ab37160b4c3b8378f4a"} Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.893454 4482 scope.go:117] "RemoveContainer" containerID="729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.893571 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.904482 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b1a45f03-7f92-428d-8cac-f0b98c637133","Type":"ContainerStarted","Data":"5537885eec093856b6825125a1a280e8a84181cc6dc00317474cf90f7e83f49a"} Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.904502 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b1a45f03-7f92-428d-8cac-f0b98c637133","Type":"ContainerStarted","Data":"cc0f90675f414ad5e4cd7da8fb17e8a410097dea792f0a225f3944b7c916cb4f"} Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.905087 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.911948 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791a624e-c2f0-46e9-aec4-9d93db804972-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.930790 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.930780844 podStartE2EDuration="1.930780844s" podCreationTimestamp="2025-11-25 07:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:12.923541677 +0000 UTC m=+1087.411772926" watchObservedRunningTime="2025-11-25 07:05:12.930780844 +0000 UTC m=+1087.419012103" Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.939302 4482 generic.go:334] "Generic (PLEG): container finished" 
podID="923dd3f7-190f-4715-a057-3eb83c260918" containerID="862b576c7d68825f91daaa8384fd3fd1f4032f205a1608bcd6f78f293b8d4c23" exitCode=0 Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.939397 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923dd3f7-190f-4715-a057-3eb83c260918","Type":"ContainerDied","Data":"862b576c7d68825f91daaa8384fd3fd1f4032f205a1608bcd6f78f293b8d4c23"} Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.961258 4482 generic.go:334] "Generic (PLEG): container finished" podID="39bf8ee9-d19f-43ab-8262-79538e4d1422" containerID="0198bfe5c9119d7785a1bd39e8a46a89736c4cdc77ba3a72e716795107158c80" exitCode=0 Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.961295 4482 generic.go:334] "Generic (PLEG): container finished" podID="39bf8ee9-d19f-43ab-8262-79538e4d1422" containerID="73e383b11f59d9437e5be6fb7f2c2d1124ac2d74ea9deee4887302df50adf16f" exitCode=143 Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.962510 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"39bf8ee9-d19f-43ab-8262-79538e4d1422","Type":"ContainerDied","Data":"0198bfe5c9119d7785a1bd39e8a46a89736c4cdc77ba3a72e716795107158c80"} Nov 25 07:05:12 crc kubenswrapper[4482]: I1125 07:05:12.962541 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"39bf8ee9-d19f-43ab-8262-79538e4d1422","Type":"ContainerDied","Data":"73e383b11f59d9437e5be6fb7f2c2d1124ac2d74ea9deee4887302df50adf16f"} Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.046502 4482 scope.go:117] "RemoveContainer" containerID="cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.064070 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.072474 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.076723 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.095146 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.111320 4482 scope.go:117] "RemoveContainer" containerID="729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542" Nov 25 07:05:13 crc kubenswrapper[4482]: E1125 07:05:13.118870 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542\": container with ID starting with 729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542 not found: ID does not exist" containerID="729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.118898 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542"} err="failed to get container status \"729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542\": rpc error: code = NotFound desc = could not find container \"729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542\": container with ID starting with 729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542 not found: ID does not exist" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.118917 4482 scope.go:117] "RemoveContainer" containerID="cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.118977 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:13 crc kubenswrapper[4482]: E1125 07:05:13.119417 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="proxy-httpd" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119430 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="proxy-httpd" Nov 25 07:05:13 crc kubenswrapper[4482]: E1125 07:05:13.119449 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="sg-core" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119457 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="sg-core" Nov 25 07:05:13 crc kubenswrapper[4482]: E1125 07:05:13.119466 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791a624e-c2f0-46e9-aec4-9d93db804972" containerName="nova-metadata-metadata" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119474 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="791a624e-c2f0-46e9-aec4-9d93db804972" containerName="nova-metadata-metadata" Nov 25 07:05:13 crc kubenswrapper[4482]: E1125 07:05:13.119483 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="ceilometer-central-agent" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119489 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="ceilometer-central-agent" Nov 25 07:05:13 crc kubenswrapper[4482]: E1125 07:05:13.119512 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="923dd3f7-190f-4715-a057-3eb83c260918" 
containerName="ceilometer-notification-agent" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119517 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="ceilometer-notification-agent" Nov 25 07:05:13 crc kubenswrapper[4482]: E1125 07:05:13.119528 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bf8ee9-d19f-43ab-8262-79538e4d1422" containerName="nova-api-api" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119533 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bf8ee9-d19f-43ab-8262-79538e4d1422" containerName="nova-api-api" Nov 25 07:05:13 crc kubenswrapper[4482]: E1125 07:05:13.119549 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791a624e-c2f0-46e9-aec4-9d93db804972" containerName="nova-metadata-log" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119554 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="791a624e-c2f0-46e9-aec4-9d93db804972" containerName="nova-metadata-log" Nov 25 07:05:13 crc kubenswrapper[4482]: E1125 07:05:13.119568 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bf8ee9-d19f-43ab-8262-79538e4d1422" containerName="nova-api-log" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119581 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bf8ee9-d19f-43ab-8262-79538e4d1422" containerName="nova-api-log" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119755 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="791a624e-c2f0-46e9-aec4-9d93db804972" containerName="nova-metadata-log" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119767 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="sg-core" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119777 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="proxy-httpd" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119783 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="ceilometer-notification-agent" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119794 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bf8ee9-d19f-43ab-8262-79538e4d1422" containerName="nova-api-log" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119805 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bf8ee9-d19f-43ab-8262-79538e4d1422" containerName="nova-api-api" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119813 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="923dd3f7-190f-4715-a057-3eb83c260918" containerName="ceilometer-central-agent" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.119821 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="791a624e-c2f0-46e9-aec4-9d93db804972" containerName="nova-metadata-metadata" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.120871 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.125079 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.125279 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 07:05:13 crc kubenswrapper[4482]: E1125 07:05:13.129284 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519\": container with ID starting with cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519 not found: ID does not exist" containerID="cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.129317 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519"} err="failed to get container status \"cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519\": rpc error: code = NotFound desc = could not find container \"cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519\": container with ID starting with cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519 not found: ID does not exist" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.129334 4482 scope.go:117] "RemoveContainer" containerID="729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.131600 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542"} err="failed to get container status \"729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542\": rpc error: code = NotFound desc = could not find container \"729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542\": container with ID starting with 729c43575092828a97167e2664eb80fca4f517ed71552dd68a4b9aba56d95542 not found: ID does not exist" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.131618 4482 scope.go:117] "RemoveContainer" containerID="cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.138253 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519"} err="failed to get container status \"cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519\": rpc error: code = NotFound desc = could not find container \"cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519\": container with ID starting with cccc4159e9e5f2f8dd06e5e88ca79ffe3ce2c9263f24a43c1942a55bbfce7519 not found: ID does not exist" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.138306 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.231613 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bf8ee9-d19f-43ab-8262-79538e4d1422-combined-ca-bundle\") pod \"39bf8ee9-d19f-43ab-8262-79538e4d1422\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " Nov 25 07:05:13 crc 
kubenswrapper[4482]: I1125 07:05:13.231841 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bf8ee9-d19f-43ab-8262-79538e4d1422-config-data\") pod \"39bf8ee9-d19f-43ab-8262-79538e4d1422\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.231864 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bf8ee9-d19f-43ab-8262-79538e4d1422-logs\") pod \"39bf8ee9-d19f-43ab-8262-79538e4d1422\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.231884 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdz99\" (UniqueName: \"kubernetes.io/projected/923dd3f7-190f-4715-a057-3eb83c260918-kube-api-access-jdz99\") pod \"923dd3f7-190f-4715-a057-3eb83c260918\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.231904 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923dd3f7-190f-4715-a057-3eb83c260918-run-httpd\") pod \"923dd3f7-190f-4715-a057-3eb83c260918\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.232005 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-config-data\") pod \"923dd3f7-190f-4715-a057-3eb83c260918\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.232039 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-sg-core-conf-yaml\") pod \"923dd3f7-190f-4715-a057-3eb83c260918\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.232058 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923dd3f7-190f-4715-a057-3eb83c260918-log-httpd\") pod \"923dd3f7-190f-4715-a057-3eb83c260918\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.232095 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-scripts\") pod \"923dd3f7-190f-4715-a057-3eb83c260918\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.232111 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpspz\" (UniqueName: \"kubernetes.io/projected/39bf8ee9-d19f-43ab-8262-79538e4d1422-kube-api-access-wpspz\") pod \"39bf8ee9-d19f-43ab-8262-79538e4d1422\" (UID: \"39bf8ee9-d19f-43ab-8262-79538e4d1422\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.232147 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-combined-ca-bundle\") pod \"923dd3f7-190f-4715-a057-3eb83c260918\" (UID: \"923dd3f7-190f-4715-a057-3eb83c260918\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.232462 4482 
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.232507 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-config-data\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0"
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.232579 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfr4g\" (UniqueName: \"kubernetes.io/projected/afed4167-c22e-402c-9fc3-89eb3b1f22ee-kube-api-access-cfr4g\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0"
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.232600 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0"
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.232616 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afed4167-c22e-402c-9fc3-89eb3b1f22ee-logs\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0"
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.233725 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/923dd3f7-190f-4715-a057-3eb83c260918-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "923dd3f7-190f-4715-a057-3eb83c260918" (UID: "923dd3f7-190f-4715-a057-3eb83c260918"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.243542 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/923dd3f7-190f-4715-a057-3eb83c260918-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "923dd3f7-190f-4715-a057-3eb83c260918" (UID: "923dd3f7-190f-4715-a057-3eb83c260918"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.243696 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39bf8ee9-d19f-43ab-8262-79538e4d1422-logs" (OuterVolumeSpecName: "logs") pod "39bf8ee9-d19f-43ab-8262-79538e4d1422" (UID: "39bf8ee9-d19f-43ab-8262-79538e4d1422"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.249321 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-scripts" (OuterVolumeSpecName: "scripts") pod "923dd3f7-190f-4715-a057-3eb83c260918" (UID: "923dd3f7-190f-4715-a057-3eb83c260918"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.277380 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/923dd3f7-190f-4715-a057-3eb83c260918-kube-api-access-jdz99" (OuterVolumeSpecName: "kube-api-access-jdz99") pod "923dd3f7-190f-4715-a057-3eb83c260918" (UID: "923dd3f7-190f-4715-a057-3eb83c260918"). InnerVolumeSpecName "kube-api-access-jdz99". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.306344 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39bf8ee9-d19f-43ab-8262-79538e4d1422-kube-api-access-wpspz" (OuterVolumeSpecName: "kube-api-access-wpspz") pod "39bf8ee9-d19f-43ab-8262-79538e4d1422" (UID: "39bf8ee9-d19f-43ab-8262-79538e4d1422"). InnerVolumeSpecName "kube-api-access-wpspz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.336508 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0"
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.336595 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-config-data\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0"
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.336682 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfr4g\" (UniqueName: \"kubernetes.io/projected/afed4167-c22e-402c-9fc3-89eb3b1f22ee-kube-api-access-cfr4g\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0"
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.336706 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0"
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.336720 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afed4167-c22e-402c-9fc3-89eb3b1f22ee-logs\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0"
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.336798 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bf8ee9-d19f-43ab-8262-79538e4d1422-logs\") on node \"crc\" DevicePath \"\""
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.336810 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdz99\" (UniqueName: \"kubernetes.io/projected/923dd3f7-190f-4715-a057-3eb83c260918-kube-api-access-jdz99\") on node \"crc\" DevicePath \"\""
Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.336818 4482 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923dd3f7-190f-4715-a057-3eb83c260918-run-httpd\") on node \"crc\" DevicePath \"\""
on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.336830 4482 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923dd3f7-190f-4715-a057-3eb83c260918-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.336838 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.336846 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpspz\" (UniqueName: \"kubernetes.io/projected/39bf8ee9-d19f-43ab-8262-79538e4d1422-kube-api-access-wpspz\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.337217 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afed4167-c22e-402c-9fc3-89eb3b1f22ee-logs\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.368903 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.368905 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-config-data\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.370490 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.385356 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfr4g\" (UniqueName: \"kubernetes.io/projected/afed4167-c22e-402c-9fc3-89eb3b1f22ee-kube-api-access-cfr4g\") pod \"nova-metadata-0\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " pod="openstack/nova-metadata-0" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.389524 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "923dd3f7-190f-4715-a057-3eb83c260918" (UID: "923dd3f7-190f-4715-a057-3eb83c260918"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.400603 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39bf8ee9-d19f-43ab-8262-79538e4d1422-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39bf8ee9-d19f-43ab-8262-79538e4d1422" (UID: "39bf8ee9-d19f-43ab-8262-79538e4d1422"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.439064 4482 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.439090 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bf8ee9-d19f-43ab-8262-79538e4d1422-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.439971 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84dbcdd9df-95cth" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.466323 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.467291 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39bf8ee9-d19f-43ab-8262-79538e4d1422-config-data" (OuterVolumeSpecName: "config-data") pod "39bf8ee9-d19f-43ab-8262-79538e4d1422" (UID: "39bf8ee9-d19f-43ab-8262-79538e4d1422"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.544268 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-config\") pod \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.544567 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-ovsdbserver-sb\") pod \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.544618 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-dns-swift-storage-0\") pod \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.544724 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-dns-svc\") pod \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.544988 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-ovsdbserver-nb\") pod \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.545053 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h88tt\" (UniqueName: \"kubernetes.io/projected/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-kube-api-access-h88tt\") pod \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\" (UID: \"9189dc29-1a63-4e21-b4c6-066c86c6a7ab\") " Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 
07:05:13.545553 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bf8ee9-d19f-43ab-8262-79538e4d1422-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.556289 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-config-data" (OuterVolumeSpecName: "config-data") pod "923dd3f7-190f-4715-a057-3eb83c260918" (UID: "923dd3f7-190f-4715-a057-3eb83c260918"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.562839 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-kube-api-access-h88tt" (OuterVolumeSpecName: "kube-api-access-h88tt") pod "9189dc29-1a63-4e21-b4c6-066c86c6a7ab" (UID: "9189dc29-1a63-4e21-b4c6-066c86c6a7ab"). InnerVolumeSpecName "kube-api-access-h88tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.607016 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "923dd3f7-190f-4715-a057-3eb83c260918" (UID: "923dd3f7-190f-4715-a057-3eb83c260918"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.651393 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.651423 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h88tt\" (UniqueName: \"kubernetes.io/projected/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-kube-api-access-h88tt\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.651436 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/923dd3f7-190f-4715-a057-3eb83c260918-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.653765 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9189dc29-1a63-4e21-b4c6-066c86c6a7ab" (UID: "9189dc29-1a63-4e21-b4c6-066c86c6a7ab"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.663224 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-config" (OuterVolumeSpecName: "config") pod "9189dc29-1a63-4e21-b4c6-066c86c6a7ab" (UID: "9189dc29-1a63-4e21-b4c6-066c86c6a7ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.674583 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9189dc29-1a63-4e21-b4c6-066c86c6a7ab" (UID: "9189dc29-1a63-4e21-b4c6-066c86c6a7ab"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.681624 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9189dc29-1a63-4e21-b4c6-066c86c6a7ab" (UID: "9189dc29-1a63-4e21-b4c6-066c86c6a7ab"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.681639 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9189dc29-1a63-4e21-b4c6-066c86c6a7ab" (UID: "9189dc29-1a63-4e21-b4c6-066c86c6a7ab"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.753630 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.753662 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.753671 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.753680 4482 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.753690 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9189dc29-1a63-4e21-b4c6-066c86c6a7ab-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:13 crc kubenswrapper[4482]: I1125 07:05:13.858854 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="791a624e-c2f0-46e9-aec4-9d93db804972" path="/var/lib/kubelet/pods/791a624e-c2f0-46e9-aec4-9d93db804972/volumes" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.023510 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"39bf8ee9-d19f-43ab-8262-79538e4d1422","Type":"ContainerDied","Data":"3dda7a887d28883769c372e184a742da77d2821ca1d9081930ebf900fc80f897"} Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.023575 4482 scope.go:117] "RemoveContainer" containerID="0198bfe5c9119d7785a1bd39e8a46a89736c4cdc77ba3a72e716795107158c80" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.023780 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.093709 4482 generic.go:334] "Generic (PLEG): container finished" podID="cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d" containerID="dc337e694aff42f5f1e50941d1fc9763e0bb538c31efd27659ef20f62153f7e9" exitCode=0 Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.093804 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zwzh2" event={"ID":"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d","Type":"ContainerDied","Data":"dc337e694aff42f5f1e50941d1fc9763e0bb538c31efd27659ef20f62153f7e9"} Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.115850 4482 scope.go:117] "RemoveContainer" containerID="73e383b11f59d9437e5be6fb7f2c2d1124ac2d74ea9deee4887302df50adf16f" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.116041 4482 generic.go:334] "Generic (PLEG): container finished" podID="9189dc29-1a63-4e21-b4c6-066c86c6a7ab" containerID="2b7cf784913e44d4f524680ae537f8d6f3bf8195b7ff2f2af16085ac5c04e0f2" exitCode=0 Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.116061 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84dbcdd9df-95cth" event={"ID":"9189dc29-1a63-4e21-b4c6-066c86c6a7ab","Type":"ContainerDied","Data":"2b7cf784913e44d4f524680ae537f8d6f3bf8195b7ff2f2af16085ac5c04e0f2"} Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.116905 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84dbcdd9df-95cth" event={"ID":"9189dc29-1a63-4e21-b4c6-066c86c6a7ab","Type":"ContainerDied","Data":"87c17de301e6b65a27895eb4ff840273e3789f6d59321b45fc27a110aff16185"} Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.116154 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84dbcdd9df-95cth" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.163987 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.164192 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6656629b-3105-4bc0-a292-aa2fa6df9723" containerName="kube-state-metrics" containerID="cri-o://85a16ebfb6df2f637a5e283ed484cdd129cd1ea8cbf04733f93cff14a64abd8b" gracePeriod=30 Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.201572 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.204040 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.208893 4482 scope.go:117] "RemoveContainer" containerID="2b7cf784913e44d4f524680ae537f8d6f3bf8195b7ff2f2af16085ac5c04e0f2" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.209214 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.209755 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923dd3f7-190f-4715-a057-3eb83c260918","Type":"ContainerDied","Data":"2c8609885d8ab2e22021093b9ff4211ccc65a987c21a386aa90e7ceec5a2a268"} Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.229322 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.234221 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:14 crc kubenswrapper[4482]: E1125 07:05:14.234839 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9189dc29-1a63-4e21-b4c6-066c86c6a7ab" containerName="dnsmasq-dns" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.234859 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="9189dc29-1a63-4e21-b4c6-066c86c6a7ab" containerName="dnsmasq-dns" Nov 25 07:05:14 crc kubenswrapper[4482]: E1125 07:05:14.234870 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9189dc29-1a63-4e21-b4c6-066c86c6a7ab" containerName="init" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.234875 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="9189dc29-1a63-4e21-b4c6-066c86c6a7ab" containerName="init" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.235106 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="9189dc29-1a63-4e21-b4c6-066c86c6a7ab" containerName="dnsmasq-dns" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.236231 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.238701 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.251884 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.274376 4482 scope.go:117] "RemoveContainer" containerID="d814388c59fd2296da5b79d661f2fb91c99baac0ddccdce6ea7519f22c8fa728" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.286231 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84dbcdd9df-95cth"] Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.292676 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84dbcdd9df-95cth"] Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.302187 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.305458 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798ca689-d69d-488c-b333-f5097a1a2368-config-data\") pod \"nova-api-0\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.305558 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/798ca689-d69d-488c-b333-f5097a1a2368-logs\") pod \"nova-api-0\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.305781 4482 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798ca689-d69d-488c-b333-f5097a1a2368-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.305873 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsfnw\" (UniqueName: \"kubernetes.io/projected/798ca689-d69d-488c-b333-f5097a1a2368-kube-api-access-vsfnw\") pod \"nova-api-0\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.313209 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.318997 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.321262 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.323312 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.326203 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.361225 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.396343 4482 scope.go:117] "RemoveContainer" containerID="2b7cf784913e44d4f524680ae537f8d6f3bf8195b7ff2f2af16085ac5c04e0f2" Nov 25 07:05:14 crc kubenswrapper[4482]: E1125 07:05:14.404155 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b7cf784913e44d4f524680ae537f8d6f3bf8195b7ff2f2af16085ac5c04e0f2\": container with ID starting with 2b7cf784913e44d4f524680ae537f8d6f3bf8195b7ff2f2af16085ac5c04e0f2 not found: ID does not exist" containerID="2b7cf784913e44d4f524680ae537f8d6f3bf8195b7ff2f2af16085ac5c04e0f2" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.404209 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b7cf784913e44d4f524680ae537f8d6f3bf8195b7ff2f2af16085ac5c04e0f2"} err="failed to get container status \"2b7cf784913e44d4f524680ae537f8d6f3bf8195b7ff2f2af16085ac5c04e0f2\": rpc error: code = NotFound desc = could not find container \"2b7cf784913e44d4f524680ae537f8d6f3bf8195b7ff2f2af16085ac5c04e0f2\": container with ID starting with 2b7cf784913e44d4f524680ae537f8d6f3bf8195b7ff2f2af16085ac5c04e0f2 not found: ID does not exist" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.404233 4482 scope.go:117] "RemoveContainer" containerID="d814388c59fd2296da5b79d661f2fb91c99baac0ddccdce6ea7519f22c8fa728" Nov 25 07:05:14 crc kubenswrapper[4482]: E1125 07:05:14.404621 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d814388c59fd2296da5b79d661f2fb91c99baac0ddccdce6ea7519f22c8fa728\": container with ID starting with d814388c59fd2296da5b79d661f2fb91c99baac0ddccdce6ea7519f22c8fa728 not found: ID does not exist" containerID="d814388c59fd2296da5b79d661f2fb91c99baac0ddccdce6ea7519f22c8fa728" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.404687 
4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d814388c59fd2296da5b79d661f2fb91c99baac0ddccdce6ea7519f22c8fa728"} err="failed to get container status \"d814388c59fd2296da5b79d661f2fb91c99baac0ddccdce6ea7519f22c8fa728\": rpc error: code = NotFound desc = could not find container \"d814388c59fd2296da5b79d661f2fb91c99baac0ddccdce6ea7519f22c8fa728\": container with ID starting with d814388c59fd2296da5b79d661f2fb91c99baac0ddccdce6ea7519f22c8fa728 not found: ID does not exist" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.404716 4482 scope.go:117] "RemoveContainer" containerID="138e7b3fc78c7397997119aaff6facabe368ec544e7104fe981d97473c78da72" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.409014 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.409038 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.409075 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/798ca689-d69d-488c-b333-f5097a1a2368-logs\") pod \"nova-api-0\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.409097 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e7d3714-955e-451b-a10b-7a685d9484f1-run-httpd\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.409112 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-config-data\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.409150 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-scripts\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.409192 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k7fb\" (UniqueName: \"kubernetes.io/projected/2e7d3714-955e-451b-a10b-7a685d9484f1-kube-api-access-4k7fb\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.409212 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798ca689-d69d-488c-b333-f5097a1a2368-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.409239 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsfnw\" (UniqueName: \"kubernetes.io/projected/798ca689-d69d-488c-b333-f5097a1a2368-kube-api-access-vsfnw\") pod \"nova-api-0\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.409261 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e7d3714-955e-451b-a10b-7a685d9484f1-log-httpd\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.409304 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798ca689-d69d-488c-b333-f5097a1a2368-config-data\") pod \"nova-api-0\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.410843 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/798ca689-d69d-488c-b333-f5097a1a2368-logs\") pod \"nova-api-0\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.415779 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798ca689-d69d-488c-b333-f5097a1a2368-config-data\") pod \"nova-api-0\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.419164 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798ca689-d69d-488c-b333-f5097a1a2368-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.440357 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsfnw\" (UniqueName: \"kubernetes.io/projected/798ca689-d69d-488c-b333-f5097a1a2368-kube-api-access-vsfnw\") pod \"nova-api-0\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.466317 4482 scope.go:117] "RemoveContainer" containerID="8ba44be81aca99bb30c5ed8b31eb8609112c090f0d1d0fe91c2b6c395d0ee672" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.499046 4482 scope.go:117] "RemoveContainer" containerID="862b576c7d68825f91daaa8384fd3fd1f4032f205a1608bcd6f78f293b8d4c23" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.511328 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.511655 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.511845 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e7d3714-955e-451b-a10b-7a685d9484f1-run-httpd\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.511936 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-config-data\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.512059 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-scripts\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.512159 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k7fb\" (UniqueName: \"kubernetes.io/projected/2e7d3714-955e-451b-a10b-7a685d9484f1-kube-api-access-4k7fb\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.512385 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e7d3714-955e-451b-a10b-7a685d9484f1-run-httpd\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.512389 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e7d3714-955e-451b-a10b-7a685d9484f1-log-httpd\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.514291 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e7d3714-955e-451b-a10b-7a685d9484f1-log-httpd\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.520989 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-scripts\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.521016 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-config-data\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.522332 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc 
kubenswrapper[4482]: I1125 07:05:14.524009 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.532079 4482 scope.go:117] "RemoveContainer" containerID="cc47653245d4c8b1f9dab090cfd50b473a9a2fbfab4c880d9f8c960e5b7e5530" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.537186 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k7fb\" (UniqueName: \"kubernetes.io/projected/2e7d3714-955e-451b-a10b-7a685d9484f1-kube-api-access-4k7fb\") pod \"ceilometer-0\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " pod="openstack/ceilometer-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.554123 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:14 crc kubenswrapper[4482]: I1125 07:05:14.726413 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.016509 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.118573 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.306541 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"798ca689-d69d-488c-b333-f5097a1a2368","Type":"ContainerStarted","Data":"eb23cb6ba67c1cd5ccbce72422fad02451c6f8711d9111431ac4f65b12c75c55"} Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.344809 4482 generic.go:334] "Generic (PLEG): container finished" podID="6656629b-3105-4bc0-a292-aa2fa6df9723" containerID="85a16ebfb6df2f637a5e283ed484cdd129cd1ea8cbf04733f93cff14a64abd8b" exitCode=2 Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.344901 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6656629b-3105-4bc0-a292-aa2fa6df9723","Type":"ContainerDied","Data":"85a16ebfb6df2f637a5e283ed484cdd129cd1ea8cbf04733f93cff14a64abd8b"} Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.366051 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"afed4167-c22e-402c-9fc3-89eb3b1f22ee","Type":"ContainerStarted","Data":"f5671acc2b929558cce99f7c05e2307da248cbe8da3d11adf75bce8c94723e7b"} Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.366088 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"afed4167-c22e-402c-9fc3-89eb3b1f22ee","Type":"ContainerStarted","Data":"0467136d9dcd7c16d6f9693799dbc3ad5044cdc518e62251a090f4263c387451"} Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.366098 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"afed4167-c22e-402c-9fc3-89eb3b1f22ee","Type":"ContainerStarted","Data":"7697dddae680d516a231da3c142bc5f9313202c88ed91c32971aba1f2a948a61"} Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.378211 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.390207 4482 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.390192921 podStartE2EDuration="2.390192921s" podCreationTimestamp="2025-11-25 07:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:15.384977729 +0000 UTC m=+1089.873208988" watchObservedRunningTime="2025-11-25 07:05:15.390192921 +0000 UTC m=+1089.878424180" Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.420841 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.454468 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t44qq\" (UniqueName: \"kubernetes.io/projected/6656629b-3105-4bc0-a292-aa2fa6df9723-kube-api-access-t44qq\") pod \"6656629b-3105-4bc0-a292-aa2fa6df9723\" (UID: \"6656629b-3105-4bc0-a292-aa2fa6df9723\") " Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.475326 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6656629b-3105-4bc0-a292-aa2fa6df9723-kube-api-access-t44qq" (OuterVolumeSpecName: "kube-api-access-t44qq") pod "6656629b-3105-4bc0-a292-aa2fa6df9723" (UID: "6656629b-3105-4bc0-a292-aa2fa6df9723"). InnerVolumeSpecName "kube-api-access-t44qq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:15 crc kubenswrapper[4482]: W1125 07:05:15.501156 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e7d3714_955e_451b_a10b_7a685d9484f1.slice/crio-3832f657866b51da7d7aefc7736dc3764c4b2c7b1d98f97cda8aab703070e039 WatchSource:0}: Error finding container 3832f657866b51da7d7aefc7736dc3764c4b2c7b1d98f97cda8aab703070e039: Status 404 returned error can't find the container with id 3832f657866b51da7d7aefc7736dc3764c4b2c7b1d98f97cda8aab703070e039 Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.516059 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.558605 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t44qq\" (UniqueName: \"kubernetes.io/projected/6656629b-3105-4bc0-a292-aa2fa6df9723-kube-api-access-t44qq\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.752040 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.866986 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39bf8ee9-d19f-43ab-8262-79538e4d1422" path="/var/lib/kubelet/pods/39bf8ee9-d19f-43ab-8262-79538e4d1422/volumes" Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.867850 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9189dc29-1a63-4e21-b4c6-066c86c6a7ab" path="/var/lib/kubelet/pods/9189dc29-1a63-4e21-b4c6-066c86c6a7ab/volumes" Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.868603 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="923dd3f7-190f-4715-a057-3eb83c260918" path="/var/lib/kubelet/pods/923dd3f7-190f-4715-a057-3eb83c260918/volumes" Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.898474 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5vpg\" (UniqueName: \"kubernetes.io/projected/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-kube-api-access-v5vpg\") pod \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.898714 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-combined-ca-bundle\") pod \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.898874 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-config-data\") pod \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.899251 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-scripts\") pod \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\" (UID: \"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d\") " Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.909953 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-scripts" (OuterVolumeSpecName: "scripts") pod "cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d" (UID: "cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.949886 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d" (UID: "cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.950868 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-kube-api-access-v5vpg" (OuterVolumeSpecName: "kube-api-access-v5vpg") pod "cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d" (UID: "cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d"). InnerVolumeSpecName "kube-api-access-v5vpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:15 crc kubenswrapper[4482]: I1125 07:05:15.955372 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-config-data" (OuterVolumeSpecName: "config-data") pod "cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d" (UID: "cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.003446 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.003666 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5vpg\" (UniqueName: \"kubernetes.io/projected/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-kube-api-access-v5vpg\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.003764 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.004088 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.293027 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 07:05:16 crc kubenswrapper[4482]: E1125 07:05:16.293675 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d" containerName="nova-cell1-conductor-db-sync" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.293689 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d" containerName="nova-cell1-conductor-db-sync" Nov 25 07:05:16 crc kubenswrapper[4482]: E1125 07:05:16.293708 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6656629b-3105-4bc0-a292-aa2fa6df9723" containerName="kube-state-metrics" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.293714 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="6656629b-3105-4bc0-a292-aa2fa6df9723" containerName="kube-state-metrics" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.293931 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d" containerName="nova-cell1-conductor-db-sync" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.293947 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="6656629b-3105-4bc0-a292-aa2fa6df9723" containerName="kube-state-metrics" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.294605 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.310359 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.398895 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zwzh2" event={"ID":"cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d","Type":"ContainerDied","Data":"1d227f12f715ada56621acaf235639a1352f5447a7ee5405cc91d61b86de71be"} Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.398937 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d227f12f715ada56621acaf235639a1352f5447a7ee5405cc91d61b86de71be" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.399059 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zwzh2" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.413547 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6656629b-3105-4bc0-a292-aa2fa6df9723","Type":"ContainerDied","Data":"dcd4a5874d2490f3f1e953f623bc7bab9fa65e85f9b15c922d9161dd4ddc03e1"} Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.413580 4482 scope.go:117] "RemoveContainer" containerID="85a16ebfb6df2f637a5e283ed484cdd129cd1ea8cbf04733f93cff14a64abd8b" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.413716 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.413936 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b32c22-7167-4f45-9a4e-516890bd9913-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"33b32c22-7167-4f45-9a4e-516890bd9913\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.413973 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b32c22-7167-4f45-9a4e-516890bd9913-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"33b32c22-7167-4f45-9a4e-516890bd9913\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.413993 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k62j\" (UniqueName: \"kubernetes.io/projected/33b32c22-7167-4f45-9a4e-516890bd9913-kube-api-access-2k62j\") pod \"nova-cell1-conductor-0\" (UID: \"33b32c22-7167-4f45-9a4e-516890bd9913\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.426622 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"798ca689-d69d-488c-b333-f5097a1a2368","Type":"ContainerStarted","Data":"679bfa0ae5fc1e28ac3dca9abe9504744dd085c497a65ceb445d9adde7bde272"} Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.426650 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"798ca689-d69d-488c-b333-f5097a1a2368","Type":"ContainerStarted","Data":"10f127cbc008f39adcadb3ad29ef695497e23941aa2e706c3e24736bb9c21ab9"} Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.434692 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"2e7d3714-955e-451b-a10b-7a685d9484f1","Type":"ContainerStarted","Data":"3832f657866b51da7d7aefc7736dc3764c4b2c7b1d98f97cda8aab703070e039"} Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.454548 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.504100 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.516993 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b32c22-7167-4f45-9a4e-516890bd9913-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"33b32c22-7167-4f45-9a4e-516890bd9913\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.517068 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b32c22-7167-4f45-9a4e-516890bd9913-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"33b32c22-7167-4f45-9a4e-516890bd9913\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.517086 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k62j\" (UniqueName: \"kubernetes.io/projected/33b32c22-7167-4f45-9a4e-516890bd9913-kube-api-access-2k62j\") pod \"nova-cell1-conductor-0\" (UID: \"33b32c22-7167-4f45-9a4e-516890bd9913\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.522033 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.524163 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.526872 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.526854225 podStartE2EDuration="2.526854225s" podCreationTimestamp="2025-11-25 07:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:16.453735233 +0000 UTC m=+1090.941966493" watchObservedRunningTime="2025-11-25 07:05:16.526854225 +0000 UTC m=+1091.015085484" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.535107 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b32c22-7167-4f45-9a4e-516890bd9913-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"33b32c22-7167-4f45-9a4e-516890bd9913\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.535924 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.536534 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.539667 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b32c22-7167-4f45-9a4e-516890bd9913-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"33b32c22-7167-4f45-9a4e-516890bd9913\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.542978 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k62j\" (UniqueName: \"kubernetes.io/projected/33b32c22-7167-4f45-9a4e-516890bd9913-kube-api-access-2k62j\") pod \"nova-cell1-conductor-0\" (UID: \"33b32c22-7167-4f45-9a4e-516890bd9913\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.552603 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.622705 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1a79608b-f242-45d3-aa13-73c0d7bfd626-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1a79608b-f242-45d3-aa13-73c0d7bfd626\") " pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.622894 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a79608b-f242-45d3-aa13-73c0d7bfd626-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1a79608b-f242-45d3-aa13-73c0d7bfd626\") " pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.623199 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89cvz\" (UniqueName: \"kubernetes.io/projected/1a79608b-f242-45d3-aa13-73c0d7bfd626-kube-api-access-89cvz\") pod \"kube-state-metrics-0\" (UID: \"1a79608b-f242-45d3-aa13-73c0d7bfd626\") " pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.623232 4482 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a79608b-f242-45d3-aa13-73c0d7bfd626-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1a79608b-f242-45d3-aa13-73c0d7bfd626\") " pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.650554 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.725616 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1a79608b-f242-45d3-aa13-73c0d7bfd626-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1a79608b-f242-45d3-aa13-73c0d7bfd626\") " pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.725669 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a79608b-f242-45d3-aa13-73c0d7bfd626-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1a79608b-f242-45d3-aa13-73c0d7bfd626\") " pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.725823 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89cvz\" (UniqueName: \"kubernetes.io/projected/1a79608b-f242-45d3-aa13-73c0d7bfd626-kube-api-access-89cvz\") pod \"kube-state-metrics-0\" (UID: \"1a79608b-f242-45d3-aa13-73c0d7bfd626\") " pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.725854 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a79608b-f242-45d3-aa13-73c0d7bfd626-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1a79608b-f242-45d3-aa13-73c0d7bfd626\") " pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.731016 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a79608b-f242-45d3-aa13-73c0d7bfd626-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1a79608b-f242-45d3-aa13-73c0d7bfd626\") " pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.731121 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a79608b-f242-45d3-aa13-73c0d7bfd626-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1a79608b-f242-45d3-aa13-73c0d7bfd626\") " pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.731411 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1a79608b-f242-45d3-aa13-73c0d7bfd626-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1a79608b-f242-45d3-aa13-73c0d7bfd626\") " pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.740541 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89cvz\" (UniqueName: \"kubernetes.io/projected/1a79608b-f242-45d3-aa13-73c0d7bfd626-kube-api-access-89cvz\") pod \"kube-state-metrics-0\" (UID: 
\"1a79608b-f242-45d3-aa13-73c0d7bfd626\") " pod="openstack/kube-state-metrics-0" Nov 25 07:05:16 crc kubenswrapper[4482]: I1125 07:05:16.853589 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 07:05:17 crc kubenswrapper[4482]: I1125 07:05:17.006100 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 07:05:17 crc kubenswrapper[4482]: I1125 07:05:17.448060 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e7d3714-955e-451b-a10b-7a685d9484f1","Type":"ContainerStarted","Data":"353867f0bb2db3b28801c994081d3166c5865754f32374c667c3977742fb3d00"} Nov 25 07:05:17 crc kubenswrapper[4482]: I1125 07:05:17.448354 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e7d3714-955e-451b-a10b-7a685d9484f1","Type":"ContainerStarted","Data":"fe4bcab904122620841783ba66adf86447eea21203479b2804351a2ab838531b"} Nov 25 07:05:17 crc kubenswrapper[4482]: I1125 07:05:17.452576 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"33b32c22-7167-4f45-9a4e-516890bd9913","Type":"ContainerStarted","Data":"febb2bcf67672f512d24611906c68f0a2d8ade3be7993c6527ef52161eec22ed"} Nov 25 07:05:17 crc kubenswrapper[4482]: I1125 07:05:17.452626 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"33b32c22-7167-4f45-9a4e-516890bd9913","Type":"ContainerStarted","Data":"593cf4273b7099e998a5ff1537bf2d9d21ae15adbeee80fff9ad675b1bee0191"} Nov 25 07:05:17 crc kubenswrapper[4482]: I1125 07:05:17.452645 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 25 07:05:17 crc kubenswrapper[4482]: I1125 07:05:17.482948 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:05:17 crc kubenswrapper[4482]: W1125 07:05:17.491356 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a79608b_f242_45d3_aa13_73c0d7bfd626.slice/crio-8e9e428971f0ab2baa9c954b01765fddc7752222403a6df2186d5b40bbb21155 WatchSource:0}: Error finding container 8e9e428971f0ab2baa9c954b01765fddc7752222403a6df2186d5b40bbb21155: Status 404 returned error can't find the container with id 8e9e428971f0ab2baa9c954b01765fddc7752222403a6df2186d5b40bbb21155 Nov 25 07:05:17 crc kubenswrapper[4482]: I1125 07:05:17.491960 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.491941295 podStartE2EDuration="1.491941295s" podCreationTimestamp="2025-11-25 07:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:17.474645019 +0000 UTC m=+1091.962876277" watchObservedRunningTime="2025-11-25 07:05:17.491941295 +0000 UTC m=+1091.980172544" Nov 25 07:05:17 crc kubenswrapper[4482]: I1125 07:05:17.841651 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6656629b-3105-4bc0-a292-aa2fa6df9723" path="/var/lib/kubelet/pods/6656629b-3105-4bc0-a292-aa2fa6df9723/volumes" Nov 25 07:05:17 crc kubenswrapper[4482]: I1125 07:05:17.983802 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:18 crc kubenswrapper[4482]: I1125 07:05:18.466877 4482 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 07:05:18 crc kubenswrapper[4482]: I1125 07:05:18.467293 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 07:05:18 crc kubenswrapper[4482]: I1125 07:05:18.471211 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1a79608b-f242-45d3-aa13-73c0d7bfd626","Type":"ContainerStarted","Data":"5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9"} Nov 25 07:05:18 crc kubenswrapper[4482]: I1125 07:05:18.471241 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1a79608b-f242-45d3-aa13-73c0d7bfd626","Type":"ContainerStarted","Data":"8e9e428971f0ab2baa9c954b01765fddc7752222403a6df2186d5b40bbb21155"} Nov 25 07:05:18 crc kubenswrapper[4482]: I1125 07:05:18.471376 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 07:05:18 crc kubenswrapper[4482]: I1125 07:05:18.473996 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e7d3714-955e-451b-a10b-7a685d9484f1","Type":"ContainerStarted","Data":"9ecf9fc0fedeb973b39670be104ab2463d6a44479ed856d85fac574516a75bc1"} Nov 25 07:05:19 crc kubenswrapper[4482]: I1125 07:05:19.482114 4482 generic.go:334] "Generic (PLEG): container finished" podID="1909a799-3429-4fe2-adca-d756ae0c7c59" containerID="9dfc79e9ca51e0b4abf83b05a54ac2273275d7193b81548cec98fdbf415d0864" exitCode=0 Nov 25 07:05:19 crc kubenswrapper[4482]: I1125 07:05:19.482370 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vddpr" event={"ID":"1909a799-3429-4fe2-adca-d756ae0c7c59","Type":"ContainerDied","Data":"9dfc79e9ca51e0b4abf83b05a54ac2273275d7193b81548cec98fdbf415d0864"} Nov 25 07:05:19 crc kubenswrapper[4482]: I1125 07:05:19.502308 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.189822195 podStartE2EDuration="3.50229183s" podCreationTimestamp="2025-11-25 07:05:16 +0000 UTC" firstStartedPulling="2025-11-25 07:05:17.493448977 +0000 UTC m=+1091.981680236" lastFinishedPulling="2025-11-25 07:05:17.805918612 +0000 UTC m=+1092.294149871" observedRunningTime="2025-11-25 07:05:18.492201502 +0000 UTC m=+1092.980432751" watchObservedRunningTime="2025-11-25 07:05:19.50229183 +0000 UTC m=+1093.990523089" Nov 25 07:05:20 crc kubenswrapper[4482]: I1125 07:05:20.879031 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:20 crc kubenswrapper[4482]: I1125 07:05:20.933199 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-combined-ca-bundle\") pod \"1909a799-3429-4fe2-adca-d756ae0c7c59\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " Nov 25 07:05:20 crc kubenswrapper[4482]: I1125 07:05:20.933405 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-scripts\") pod \"1909a799-3429-4fe2-adca-d756ae0c7c59\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " Nov 25 07:05:20 crc kubenswrapper[4482]: I1125 07:05:20.933514 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-config-data\") pod \"1909a799-3429-4fe2-adca-d756ae0c7c59\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " Nov 25 07:05:20 crc kubenswrapper[4482]: I1125 07:05:20.933657 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28tnr\" (UniqueName: \"kubernetes.io/projected/1909a799-3429-4fe2-adca-d756ae0c7c59-kube-api-access-28tnr\") pod \"1909a799-3429-4fe2-adca-d756ae0c7c59\" (UID: \"1909a799-3429-4fe2-adca-d756ae0c7c59\") " Nov 25 07:05:20 crc kubenswrapper[4482]: I1125 07:05:20.947050 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1909a799-3429-4fe2-adca-d756ae0c7c59-kube-api-access-28tnr" (OuterVolumeSpecName: "kube-api-access-28tnr") pod "1909a799-3429-4fe2-adca-d756ae0c7c59" (UID: "1909a799-3429-4fe2-adca-d756ae0c7c59"). InnerVolumeSpecName "kube-api-access-28tnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:20 crc kubenswrapper[4482]: I1125 07:05:20.950296 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-scripts" (OuterVolumeSpecName: "scripts") pod "1909a799-3429-4fe2-adca-d756ae0c7c59" (UID: "1909a799-3429-4fe2-adca-d756ae0c7c59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:20 crc kubenswrapper[4482]: I1125 07:05:20.969427 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-config-data" (OuterVolumeSpecName: "config-data") pod "1909a799-3429-4fe2-adca-d756ae0c7c59" (UID: "1909a799-3429-4fe2-adca-d756ae0c7c59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:20 crc kubenswrapper[4482]: I1125 07:05:20.980331 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1909a799-3429-4fe2-adca-d756ae0c7c59" (UID: "1909a799-3429-4fe2-adca-d756ae0c7c59"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:21 crc kubenswrapper[4482]: I1125 07:05:21.036875 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:21 crc kubenswrapper[4482]: I1125 07:05:21.037086 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:21 crc kubenswrapper[4482]: I1125 07:05:21.037154 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28tnr\" (UniqueName: \"kubernetes.io/projected/1909a799-3429-4fe2-adca-d756ae0c7c59-kube-api-access-28tnr\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:21 crc kubenswrapper[4482]: I1125 07:05:21.037341 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1909a799-3429-4fe2-adca-d756ae0c7c59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:21 crc kubenswrapper[4482]: I1125 07:05:21.436236 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 25 07:05:21 crc kubenswrapper[4482]: I1125 07:05:21.506097 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vddpr" event={"ID":"1909a799-3429-4fe2-adca-d756ae0c7c59","Type":"ContainerDied","Data":"36e30b1036b27cdf886ce3050abc28aad81ea5e52842065513fe93f49c2a0094"} Nov 25 07:05:21 crc kubenswrapper[4482]: I1125 07:05:21.506162 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36e30b1036b27cdf886ce3050abc28aad81ea5e52842065513fe93f49c2a0094" Nov 25 07:05:21 crc kubenswrapper[4482]: I1125 07:05:21.506156 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vddpr" Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.112664 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.113249 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="798ca689-d69d-488c-b333-f5097a1a2368" containerName="nova-api-log" containerID="cri-o://10f127cbc008f39adcadb3ad29ef695497e23941aa2e706c3e24736bb9c21ab9" gracePeriod=30 Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.113794 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="798ca689-d69d-488c-b333-f5097a1a2368" containerName="nova-api-api" containerID="cri-o://679bfa0ae5fc1e28ac3dca9abe9504744dd085c497a65ceb445d9adde7bde272" gracePeriod=30 Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.166840 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.167071 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="afed4167-c22e-402c-9fc3-89eb3b1f22ee" containerName="nova-metadata-log" containerID="cri-o://0467136d9dcd7c16d6f9693799dbc3ad5044cdc518e62251a090f4263c387451" gracePeriod=30 Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.167216 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="afed4167-c22e-402c-9fc3-89eb3b1f22ee" containerName="nova-metadata-metadata" containerID="cri-o://f5671acc2b929558cce99f7c05e2307da248cbe8da3d11adf75bce8c94723e7b" gracePeriod=30 Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.570078 4482 generic.go:334] "Generic (PLEG): container finished" podID="afed4167-c22e-402c-9fc3-89eb3b1f22ee" containerID="f5671acc2b929558cce99f7c05e2307da248cbe8da3d11adf75bce8c94723e7b" exitCode=0 Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.570273 4482 generic.go:334] "Generic (PLEG): container finished" podID="afed4167-c22e-402c-9fc3-89eb3b1f22ee" containerID="0467136d9dcd7c16d6f9693799dbc3ad5044cdc518e62251a090f4263c387451" exitCode=143 Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.570314 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"afed4167-c22e-402c-9fc3-89eb3b1f22ee","Type":"ContainerDied","Data":"f5671acc2b929558cce99f7c05e2307da248cbe8da3d11adf75bce8c94723e7b"} Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.570403 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"afed4167-c22e-402c-9fc3-89eb3b1f22ee","Type":"ContainerDied","Data":"0467136d9dcd7c16d6f9693799dbc3ad5044cdc518e62251a090f4263c387451"} Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.594103 4482 generic.go:334] "Generic (PLEG): container finished" podID="798ca689-d69d-488c-b333-f5097a1a2368" containerID="10f127cbc008f39adcadb3ad29ef695497e23941aa2e706c3e24736bb9c21ab9" exitCode=143 Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.594146 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"798ca689-d69d-488c-b333-f5097a1a2368","Type":"ContainerDied","Data":"10f127cbc008f39adcadb3ad29ef695497e23941aa2e706c3e24736bb9c21ab9"} Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.619375 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"2e7d3714-955e-451b-a10b-7a685d9484f1","Type":"ContainerStarted","Data":"8438232233fe2bfbb9ce62c3dfd589aa8089dc4795de1ded310e6ffbed88637e"} Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.619561 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="ceilometer-central-agent" containerID="cri-o://fe4bcab904122620841783ba66adf86447eea21203479b2804351a2ab838531b" gracePeriod=30 Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.619872 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.620125 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="proxy-httpd" containerID="cri-o://8438232233fe2bfbb9ce62c3dfd589aa8089dc4795de1ded310e6ffbed88637e" gracePeriod=30 Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.620211 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="sg-core" containerID="cri-o://9ecf9fc0fedeb973b39670be104ab2463d6a44479ed856d85fac574516a75bc1" gracePeriod=30 Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.620248 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="ceilometer-notification-agent" containerID="cri-o://353867f0bb2db3b28801c994081d3166c5865754f32374c667c3977742fb3d00" gracePeriod=30 Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.674665 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.473885289 podStartE2EDuration="8.674648789s" podCreationTimestamp="2025-11-25 07:05:14 +0000 UTC" firstStartedPulling="2025-11-25 07:05:15.515138021 +0000 UTC m=+1090.003369280" lastFinishedPulling="2025-11-25 07:05:21.715901521 +0000 UTC m=+1096.204132780" observedRunningTime="2025-11-25 07:05:22.67172671 +0000 UTC m=+1097.159957970" watchObservedRunningTime="2025-11-25 07:05:22.674648789 +0000 UTC m=+1097.162880049" Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.726163 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.785918 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afed4167-c22e-402c-9fc3-89eb3b1f22ee-logs\") pod \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.785998 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfr4g\" (UniqueName: \"kubernetes.io/projected/afed4167-c22e-402c-9fc3-89eb3b1f22ee-kube-api-access-cfr4g\") pod \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.786481 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afed4167-c22e-402c-9fc3-89eb3b1f22ee-logs" (OuterVolumeSpecName: "logs") pod "afed4167-c22e-402c-9fc3-89eb3b1f22ee" (UID: "afed4167-c22e-402c-9fc3-89eb3b1f22ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.786727 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-combined-ca-bundle\") pod \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.786783 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-config-data\") pod \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.786847 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-nova-metadata-tls-certs\") pod \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\" (UID: \"afed4167-c22e-402c-9fc3-89eb3b1f22ee\") " Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.801382 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afed4167-c22e-402c-9fc3-89eb3b1f22ee-kube-api-access-cfr4g" (OuterVolumeSpecName: "kube-api-access-cfr4g") pod "afed4167-c22e-402c-9fc3-89eb3b1f22ee" (UID: "afed4167-c22e-402c-9fc3-89eb3b1f22ee"). InnerVolumeSpecName "kube-api-access-cfr4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.866895 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-config-data" (OuterVolumeSpecName: "config-data") pod "afed4167-c22e-402c-9fc3-89eb3b1f22ee" (UID: "afed4167-c22e-402c-9fc3-89eb3b1f22ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.884308 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afed4167-c22e-402c-9fc3-89eb3b1f22ee" (UID: "afed4167-c22e-402c-9fc3-89eb3b1f22ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.890946 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.890983 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.890995 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afed4167-c22e-402c-9fc3-89eb3b1f22ee-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.891005 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfr4g\" (UniqueName: \"kubernetes.io/projected/afed4167-c22e-402c-9fc3-89eb3b1f22ee-kube-api-access-cfr4g\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.902455 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "afed4167-c22e-402c-9fc3-89eb3b1f22ee" (UID: "afed4167-c22e-402c-9fc3-89eb3b1f22ee"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:22 crc kubenswrapper[4482]: I1125 07:05:22.992924 4482 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/afed4167-c22e-402c-9fc3-89eb3b1f22ee-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.185529 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.301969 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798ca689-d69d-488c-b333-f5097a1a2368-config-data\") pod \"798ca689-d69d-488c-b333-f5097a1a2368\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.302141 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/798ca689-d69d-488c-b333-f5097a1a2368-logs\") pod \"798ca689-d69d-488c-b333-f5097a1a2368\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.302336 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798ca689-d69d-488c-b333-f5097a1a2368-combined-ca-bundle\") pod \"798ca689-d69d-488c-b333-f5097a1a2368\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.302643 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsfnw\" (UniqueName: \"kubernetes.io/projected/798ca689-d69d-488c-b333-f5097a1a2368-kube-api-access-vsfnw\") pod \"798ca689-d69d-488c-b333-f5097a1a2368\" (UID: \"798ca689-d69d-488c-b333-f5097a1a2368\") " Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.302909 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/798ca689-d69d-488c-b333-f5097a1a2368-logs" (OuterVolumeSpecName: "logs") pod "798ca689-d69d-488c-b333-f5097a1a2368" (UID: "798ca689-d69d-488c-b333-f5097a1a2368"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.303570 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/798ca689-d69d-488c-b333-f5097a1a2368-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.307902 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798ca689-d69d-488c-b333-f5097a1a2368-kube-api-access-vsfnw" (OuterVolumeSpecName: "kube-api-access-vsfnw") pod "798ca689-d69d-488c-b333-f5097a1a2368" (UID: "798ca689-d69d-488c-b333-f5097a1a2368"). InnerVolumeSpecName "kube-api-access-vsfnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.333962 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798ca689-d69d-488c-b333-f5097a1a2368-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "798ca689-d69d-488c-b333-f5097a1a2368" (UID: "798ca689-d69d-488c-b333-f5097a1a2368"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.334085 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798ca689-d69d-488c-b333-f5097a1a2368-config-data" (OuterVolumeSpecName: "config-data") pod "798ca689-d69d-488c-b333-f5097a1a2368" (UID: "798ca689-d69d-488c-b333-f5097a1a2368"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.405079 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798ca689-d69d-488c-b333-f5097a1a2368-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.405106 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798ca689-d69d-488c-b333-f5097a1a2368-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.405119 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsfnw\" (UniqueName: \"kubernetes.io/projected/798ca689-d69d-488c-b333-f5097a1a2368-kube-api-access-vsfnw\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.629857 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"afed4167-c22e-402c-9fc3-89eb3b1f22ee","Type":"ContainerDied","Data":"7697dddae680d516a231da3c142bc5f9313202c88ed91c32971aba1f2a948a61"} Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.629904 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.629932 4482 scope.go:117] "RemoveContainer" containerID="f5671acc2b929558cce99f7c05e2307da248cbe8da3d11adf75bce8c94723e7b" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.638739 4482 generic.go:334] "Generic (PLEG): container finished" podID="798ca689-d69d-488c-b333-f5097a1a2368" containerID="679bfa0ae5fc1e28ac3dca9abe9504744dd085c497a65ceb445d9adde7bde272" exitCode=0 Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.638813 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.638817 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"798ca689-d69d-488c-b333-f5097a1a2368","Type":"ContainerDied","Data":"679bfa0ae5fc1e28ac3dca9abe9504744dd085c497a65ceb445d9adde7bde272"} Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.638943 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"798ca689-d69d-488c-b333-f5097a1a2368","Type":"ContainerDied","Data":"eb23cb6ba67c1cd5ccbce72422fad02451c6f8711d9111431ac4f65b12c75c55"} Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.644909 4482 generic.go:334] "Generic (PLEG): container finished" podID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerID="8438232233fe2bfbb9ce62c3dfd589aa8089dc4795de1ded310e6ffbed88637e" exitCode=0 Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.644938 4482 generic.go:334] "Generic (PLEG): container finished" podID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerID="9ecf9fc0fedeb973b39670be104ab2463d6a44479ed856d85fac574516a75bc1" exitCode=2 Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.644948 4482 generic.go:334] "Generic (PLEG): container finished" podID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerID="353867f0bb2db3b28801c994081d3166c5865754f32374c667c3977742fb3d00" exitCode=0 Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.644969 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e7d3714-955e-451b-a10b-7a685d9484f1","Type":"ContainerDied","Data":"8438232233fe2bfbb9ce62c3dfd589aa8089dc4795de1ded310e6ffbed88637e"} Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.644997 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e7d3714-955e-451b-a10b-7a685d9484f1","Type":"ContainerDied","Data":"9ecf9fc0fedeb973b39670be104ab2463d6a44479ed856d85fac574516a75bc1"} Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.645005 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e7d3714-955e-451b-a10b-7a685d9484f1","Type":"ContainerDied","Data":"353867f0bb2db3b28801c994081d3166c5865754f32374c667c3977742fb3d00"} Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.663726 4482 scope.go:117] "RemoveContainer" containerID="0467136d9dcd7c16d6f9693799dbc3ad5044cdc518e62251a090f4263c387451" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.671560 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.676595 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.683027 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.686147 4482 scope.go:117] "RemoveContainer" containerID="679bfa0ae5fc1e28ac3dca9abe9504744dd085c497a65ceb445d9adde7bde272" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.687553 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.751733 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:23 crc kubenswrapper[4482]: E1125 07:05:23.753134 4482 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="798ca689-d69d-488c-b333-f5097a1a2368" containerName="nova-api-log" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.753160 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="798ca689-d69d-488c-b333-f5097a1a2368" containerName="nova-api-log" Nov 25 07:05:23 crc kubenswrapper[4482]: E1125 07:05:23.753242 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afed4167-c22e-402c-9fc3-89eb3b1f22ee" containerName="nova-metadata-log" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.753251 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="afed4167-c22e-402c-9fc3-89eb3b1f22ee" containerName="nova-metadata-log" Nov 25 07:05:23 crc kubenswrapper[4482]: E1125 07:05:23.753274 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afed4167-c22e-402c-9fc3-89eb3b1f22ee" containerName="nova-metadata-metadata" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.753281 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="afed4167-c22e-402c-9fc3-89eb3b1f22ee" containerName="nova-metadata-metadata" Nov 25 07:05:23 crc kubenswrapper[4482]: E1125 07:05:23.753302 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798ca689-d69d-488c-b333-f5097a1a2368" containerName="nova-api-api" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.753308 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="798ca689-d69d-488c-b333-f5097a1a2368" containerName="nova-api-api" Nov 25 07:05:23 crc kubenswrapper[4482]: E1125 07:05:23.753320 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1909a799-3429-4fe2-adca-d756ae0c7c59" containerName="nova-manage" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.753332 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="1909a799-3429-4fe2-adca-d756ae0c7c59" containerName="nova-manage" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.753870 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="afed4167-c22e-402c-9fc3-89eb3b1f22ee" containerName="nova-metadata-metadata" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.753917 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="798ca689-d69d-488c-b333-f5097a1a2368" containerName="nova-api-log" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.753932 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="798ca689-d69d-488c-b333-f5097a1a2368" containerName="nova-api-api" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.753950 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="1909a799-3429-4fe2-adca-d756ae0c7c59" containerName="nova-manage" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.753964 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="afed4167-c22e-402c-9fc3-89eb3b1f22ee" containerName="nova-metadata-log" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.775241 4482 scope.go:117] "RemoveContainer" containerID="10f127cbc008f39adcadb3ad29ef695497e23941aa2e706c3e24736bb9c21ab9" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.784316 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.796342 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.843047 4482 scope.go:117] "RemoveContainer" containerID="679bfa0ae5fc1e28ac3dca9abe9504744dd085c497a65ceb445d9adde7bde272" Nov 25 07:05:23 crc kubenswrapper[4482]: E1125 07:05:23.845682 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"679bfa0ae5fc1e28ac3dca9abe9504744dd085c497a65ceb445d9adde7bde272\": container with ID starting with 679bfa0ae5fc1e28ac3dca9abe9504744dd085c497a65ceb445d9adde7bde272 not found: ID does not exist" containerID="679bfa0ae5fc1e28ac3dca9abe9504744dd085c497a65ceb445d9adde7bde272" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.845721 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"679bfa0ae5fc1e28ac3dca9abe9504744dd085c497a65ceb445d9adde7bde272"} err="failed to get container status \"679bfa0ae5fc1e28ac3dca9abe9504744dd085c497a65ceb445d9adde7bde272\": rpc error: code = NotFound desc = could not find container \"679bfa0ae5fc1e28ac3dca9abe9504744dd085c497a65ceb445d9adde7bde272\": container with ID starting with 679bfa0ae5fc1e28ac3dca9abe9504744dd085c497a65ceb445d9adde7bde272 not found: ID does not exist" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.845744 4482 scope.go:117] "RemoveContainer" containerID="10f127cbc008f39adcadb3ad29ef695497e23941aa2e706c3e24736bb9c21ab9" Nov 25 07:05:23 crc kubenswrapper[4482]: E1125 07:05:23.846009 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10f127cbc008f39adcadb3ad29ef695497e23941aa2e706c3e24736bb9c21ab9\": container with ID starting with 10f127cbc008f39adcadb3ad29ef695497e23941aa2e706c3e24736bb9c21ab9 not found: ID does not exist" containerID="10f127cbc008f39adcadb3ad29ef695497e23941aa2e706c3e24736bb9c21ab9" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.846027 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f127cbc008f39adcadb3ad29ef695497e23941aa2e706c3e24736bb9c21ab9"} err="failed to get container status \"10f127cbc008f39adcadb3ad29ef695497e23941aa2e706c3e24736bb9c21ab9\": rpc error: code = NotFound desc = could not find container \"10f127cbc008f39adcadb3ad29ef695497e23941aa2e706c3e24736bb9c21ab9\": container with ID starting with 10f127cbc008f39adcadb3ad29ef695497e23941aa2e706c3e24736bb9c21ab9 not found: ID does not exist" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.846967 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83225fe0-ff09-448e-be31-e7b06a13d7c8-logs\") pod \"nova-api-0\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.847043 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74xqq\" (UniqueName: \"kubernetes.io/projected/83225fe0-ff09-448e-be31-e7b06a13d7c8-kube-api-access-74xqq\") pod \"nova-api-0\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.847365 4482 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83225fe0-ff09-448e-be31-e7b06a13d7c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.847403 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83225fe0-ff09-448e-be31-e7b06a13d7c8-config-data\") pod \"nova-api-0\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.858472 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798ca689-d69d-488c-b333-f5097a1a2368" path="/var/lib/kubelet/pods/798ca689-d69d-488c-b333-f5097a1a2368/volumes" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.859064 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afed4167-c22e-402c-9fc3-89eb3b1f22ee" path="/var/lib/kubelet/pods/afed4167-c22e-402c-9fc3-89eb3b1f22ee/volumes" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.859631 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.861569 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.861700 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.863685 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.863975 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.864140 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.949103 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67bf34f2-664a-4065-88a4-115114e4d445-logs\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.949149 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83225fe0-ff09-448e-be31-e7b06a13d7c8-config-data\") pod \"nova-api-0\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.949219 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83225fe0-ff09-448e-be31-e7b06a13d7c8-logs\") pod \"nova-api-0\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.949237 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-config-data\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:23 crc 
kubenswrapper[4482]: I1125 07:05:23.949262 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn25c\" (UniqueName: \"kubernetes.io/projected/67bf34f2-664a-4065-88a4-115114e4d445-kube-api-access-wn25c\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.949302 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.949324 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74xqq\" (UniqueName: \"kubernetes.io/projected/83225fe0-ff09-448e-be31-e7b06a13d7c8-kube-api-access-74xqq\") pod \"nova-api-0\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.949698 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83225fe0-ff09-448e-be31-e7b06a13d7c8-logs\") pod \"nova-api-0\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.949791 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.949864 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83225fe0-ff09-448e-be31-e7b06a13d7c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.953250 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83225fe0-ff09-448e-be31-e7b06a13d7c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.953306 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83225fe0-ff09-448e-be31-e7b06a13d7c8-config-data\") pod \"nova-api-0\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " pod="openstack/nova-api-0" Nov 25 07:05:23 crc kubenswrapper[4482]: I1125 07:05:23.972303 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74xqq\" (UniqueName: \"kubernetes.io/projected/83225fe0-ff09-448e-be31-e7b06a13d7c8-kube-api-access-74xqq\") pod \"nova-api-0\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " pod="openstack/nova-api-0" Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.051779 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-config-data\") pod \"nova-metadata-0\" (UID: 
\"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.051838 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn25c\" (UniqueName: \"kubernetes.io/projected/67bf34f2-664a-4065-88a4-115114e4d445-kube-api-access-wn25c\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.051875 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.051919 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.052014 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67bf34f2-664a-4065-88a4-115114e4d445-logs\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.052508 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67bf34f2-664a-4065-88a4-115114e4d445-logs\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.055646 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-config-data\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.056455 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.057838 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.068565 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn25c\" (UniqueName: \"kubernetes.io/projected/67bf34f2-664a-4065-88a4-115114e4d445-kube-api-access-wn25c\") pod \"nova-metadata-0\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " pod="openstack/nova-metadata-0" Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.126034 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.178825 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.588815 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.664356 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:05:24 crc kubenswrapper[4482]: I1125 07:05:24.665669 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83225fe0-ff09-448e-be31-e7b06a13d7c8","Type":"ContainerStarted","Data":"7f2f20572574ae83097fb44eb43b82ed8e02d70506bfce7da3a6541c86be5d84"} Nov 25 07:05:24 crc kubenswrapper[4482]: W1125 07:05:24.675884 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67bf34f2_664a_4065_88a4_115114e4d445.slice/crio-531965f33bf74458edf889548833a21588fa3654b56b5cee164b0825dd4ab4dc WatchSource:0}: Error finding container 531965f33bf74458edf889548833a21588fa3654b56b5cee164b0825dd4ab4dc: Status 404 returned error can't find the container with id 531965f33bf74458edf889548833a21588fa3654b56b5cee164b0825dd4ab4dc Nov 25 07:05:25 crc kubenswrapper[4482]: I1125 07:05:25.679410 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67bf34f2-664a-4065-88a4-115114e4d445","Type":"ContainerStarted","Data":"778e7aca03a25d5522ede02fae61c1a2273350f01d46e2e4709f6ec08c7d04e6"} Nov 25 07:05:25 crc kubenswrapper[4482]: I1125 07:05:25.680121 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67bf34f2-664a-4065-88a4-115114e4d445","Type":"ContainerStarted","Data":"fdbff5c839b6c054414f47bec15c1615105bef507d340b1d769f61e67c50d867"} Nov 25 07:05:25 crc kubenswrapper[4482]: I1125 07:05:25.680136 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67bf34f2-664a-4065-88a4-115114e4d445","Type":"ContainerStarted","Data":"531965f33bf74458edf889548833a21588fa3654b56b5cee164b0825dd4ab4dc"} Nov 25 07:05:25 crc kubenswrapper[4482]: I1125 07:05:25.683967 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83225fe0-ff09-448e-be31-e7b06a13d7c8","Type":"ContainerStarted","Data":"b632122a419bb25bca1c35ac87843ab477aca2e95ff263c997164a3336b9a384"} Nov 25 07:05:25 crc kubenswrapper[4482]: I1125 07:05:25.684040 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83225fe0-ff09-448e-be31-e7b06a13d7c8","Type":"ContainerStarted","Data":"d95cbb2b02673d1e482e88fb1f958a49dd836adcc5f91fd9b5a4e458ed89eafa"} Nov 25 07:05:25 crc kubenswrapper[4482]: I1125 07:05:25.703076 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.703064266 podStartE2EDuration="2.703064266s" podCreationTimestamp="2025-11-25 07:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:25.696415082 +0000 UTC m=+1100.184646341" watchObservedRunningTime="2025-11-25 07:05:25.703064266 +0000 UTC m=+1100.191295526" Nov 25 07:05:25 crc kubenswrapper[4482]: I1125 07:05:25.721776 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=2.72174872 podStartE2EDuration="2.72174872s" podCreationTimestamp="2025-11-25 07:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:25.715128469 +0000 UTC m=+1100.203359729" watchObservedRunningTime="2025-11-25 07:05:25.72174872 +0000 UTC m=+1100.209979969" Nov 25 07:05:26 crc kubenswrapper[4482]: I1125 07:05:26.677807 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 25 07:05:26 crc kubenswrapper[4482]: I1125 07:05:26.867016 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.689102 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.707393 4482 generic.go:334] "Generic (PLEG): container finished" podID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerID="fe4bcab904122620841783ba66adf86447eea21203479b2804351a2ab838531b" exitCode=0 Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.707462 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.707456 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e7d3714-955e-451b-a10b-7a685d9484f1","Type":"ContainerDied","Data":"fe4bcab904122620841783ba66adf86447eea21203479b2804351a2ab838531b"} Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.707645 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e7d3714-955e-451b-a10b-7a685d9484f1","Type":"ContainerDied","Data":"3832f657866b51da7d7aefc7736dc3764c4b2c7b1d98f97cda8aab703070e039"} Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.707672 4482 scope.go:117] "RemoveContainer" containerID="8438232233fe2bfbb9ce62c3dfd589aa8089dc4795de1ded310e6ffbed88637e" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.746204 4482 scope.go:117] "RemoveContainer" containerID="9ecf9fc0fedeb973b39670be104ab2463d6a44479ed856d85fac574516a75bc1" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.781904 4482 scope.go:117] "RemoveContainer" containerID="353867f0bb2db3b28801c994081d3166c5865754f32374c667c3977742fb3d00" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.799999 4482 scope.go:117] "RemoveContainer" containerID="fe4bcab904122620841783ba66adf86447eea21203479b2804351a2ab838531b" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.821920 4482 scope.go:117] "RemoveContainer" containerID="8438232233fe2bfbb9ce62c3dfd589aa8089dc4795de1ded310e6ffbed88637e" Nov 25 07:05:27 crc kubenswrapper[4482]: E1125 07:05:27.822512 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8438232233fe2bfbb9ce62c3dfd589aa8089dc4795de1ded310e6ffbed88637e\": container with ID starting with 8438232233fe2bfbb9ce62c3dfd589aa8089dc4795de1ded310e6ffbed88637e not found: ID does not exist" containerID="8438232233fe2bfbb9ce62c3dfd589aa8089dc4795de1ded310e6ffbed88637e" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.822544 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8438232233fe2bfbb9ce62c3dfd589aa8089dc4795de1ded310e6ffbed88637e"} 
err="failed to get container status \"8438232233fe2bfbb9ce62c3dfd589aa8089dc4795de1ded310e6ffbed88637e\": rpc error: code = NotFound desc = could not find container \"8438232233fe2bfbb9ce62c3dfd589aa8089dc4795de1ded310e6ffbed88637e\": container with ID starting with 8438232233fe2bfbb9ce62c3dfd589aa8089dc4795de1ded310e6ffbed88637e not found: ID does not exist" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.822565 4482 scope.go:117] "RemoveContainer" containerID="9ecf9fc0fedeb973b39670be104ab2463d6a44479ed856d85fac574516a75bc1" Nov 25 07:05:27 crc kubenswrapper[4482]: E1125 07:05:27.822868 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ecf9fc0fedeb973b39670be104ab2463d6a44479ed856d85fac574516a75bc1\": container with ID starting with 9ecf9fc0fedeb973b39670be104ab2463d6a44479ed856d85fac574516a75bc1 not found: ID does not exist" containerID="9ecf9fc0fedeb973b39670be104ab2463d6a44479ed856d85fac574516a75bc1" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.822899 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecf9fc0fedeb973b39670be104ab2463d6a44479ed856d85fac574516a75bc1"} err="failed to get container status \"9ecf9fc0fedeb973b39670be104ab2463d6a44479ed856d85fac574516a75bc1\": rpc error: code = NotFound desc = could not find container \"9ecf9fc0fedeb973b39670be104ab2463d6a44479ed856d85fac574516a75bc1\": container with ID starting with 9ecf9fc0fedeb973b39670be104ab2463d6a44479ed856d85fac574516a75bc1 not found: ID does not exist" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.822918 4482 scope.go:117] "RemoveContainer" containerID="353867f0bb2db3b28801c994081d3166c5865754f32374c667c3977742fb3d00" Nov 25 07:05:27 crc kubenswrapper[4482]: E1125 07:05:27.823222 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353867f0bb2db3b28801c994081d3166c5865754f32374c667c3977742fb3d00\": container with ID starting with 353867f0bb2db3b28801c994081d3166c5865754f32374c667c3977742fb3d00 not found: ID does not exist" containerID="353867f0bb2db3b28801c994081d3166c5865754f32374c667c3977742fb3d00" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.823245 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353867f0bb2db3b28801c994081d3166c5865754f32374c667c3977742fb3d00"} err="failed to get container status \"353867f0bb2db3b28801c994081d3166c5865754f32374c667c3977742fb3d00\": rpc error: code = NotFound desc = could not find container \"353867f0bb2db3b28801c994081d3166c5865754f32374c667c3977742fb3d00\": container with ID starting with 353867f0bb2db3b28801c994081d3166c5865754f32374c667c3977742fb3d00 not found: ID does not exist" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.823261 4482 scope.go:117] "RemoveContainer" containerID="fe4bcab904122620841783ba66adf86447eea21203479b2804351a2ab838531b" Nov 25 07:05:27 crc kubenswrapper[4482]: E1125 07:05:27.823467 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4bcab904122620841783ba66adf86447eea21203479b2804351a2ab838531b\": container with ID starting with fe4bcab904122620841783ba66adf86447eea21203479b2804351a2ab838531b not found: ID does not exist" containerID="fe4bcab904122620841783ba66adf86447eea21203479b2804351a2ab838531b" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.823487 4482 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4bcab904122620841783ba66adf86447eea21203479b2804351a2ab838531b"} err="failed to get container status \"fe4bcab904122620841783ba66adf86447eea21203479b2804351a2ab838531b\": rpc error: code = NotFound desc = could not find container \"fe4bcab904122620841783ba66adf86447eea21203479b2804351a2ab838531b\": container with ID starting with fe4bcab904122620841783ba66adf86447eea21203479b2804351a2ab838531b not found: ID does not exist" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.842162 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-sg-core-conf-yaml\") pod \"2e7d3714-955e-451b-a10b-7a685d9484f1\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.842295 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e7d3714-955e-451b-a10b-7a685d9484f1-log-httpd\") pod \"2e7d3714-955e-451b-a10b-7a685d9484f1\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.842459 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e7d3714-955e-451b-a10b-7a685d9484f1-run-httpd\") pod \"2e7d3714-955e-451b-a10b-7a685d9484f1\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.842492 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-scripts\") pod \"2e7d3714-955e-451b-a10b-7a685d9484f1\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.842543 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-config-data\") pod \"2e7d3714-955e-451b-a10b-7a685d9484f1\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.842573 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-combined-ca-bundle\") pod \"2e7d3714-955e-451b-a10b-7a685d9484f1\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.842650 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k7fb\" (UniqueName: \"kubernetes.io/projected/2e7d3714-955e-451b-a10b-7a685d9484f1-kube-api-access-4k7fb\") pod \"2e7d3714-955e-451b-a10b-7a685d9484f1\" (UID: \"2e7d3714-955e-451b-a10b-7a685d9484f1\") " Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.842845 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e7d3714-955e-451b-a10b-7a685d9484f1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2e7d3714-955e-451b-a10b-7a685d9484f1" (UID: "2e7d3714-955e-451b-a10b-7a685d9484f1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.843050 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e7d3714-955e-451b-a10b-7a685d9484f1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2e7d3714-955e-451b-a10b-7a685d9484f1" (UID: "2e7d3714-955e-451b-a10b-7a685d9484f1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.843138 4482 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e7d3714-955e-451b-a10b-7a685d9484f1-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.843154 4482 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e7d3714-955e-451b-a10b-7a685d9484f1-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.857910 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e7d3714-955e-451b-a10b-7a685d9484f1-kube-api-access-4k7fb" (OuterVolumeSpecName: "kube-api-access-4k7fb") pod "2e7d3714-955e-451b-a10b-7a685d9484f1" (UID: "2e7d3714-955e-451b-a10b-7a685d9484f1"). InnerVolumeSpecName "kube-api-access-4k7fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.866831 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-scripts" (OuterVolumeSpecName: "scripts") pod "2e7d3714-955e-451b-a10b-7a685d9484f1" (UID: "2e7d3714-955e-451b-a10b-7a685d9484f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.875756 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2e7d3714-955e-451b-a10b-7a685d9484f1" (UID: "2e7d3714-955e-451b-a10b-7a685d9484f1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.909802 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e7d3714-955e-451b-a10b-7a685d9484f1" (UID: "2e7d3714-955e-451b-a10b-7a685d9484f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.927360 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-config-data" (OuterVolumeSpecName: "config-data") pod "2e7d3714-955e-451b-a10b-7a685d9484f1" (UID: "2e7d3714-955e-451b-a10b-7a685d9484f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.944894 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.944922 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k7fb\" (UniqueName: \"kubernetes.io/projected/2e7d3714-955e-451b-a10b-7a685d9484f1-kube-api-access-4k7fb\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.945038 4482 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.945360 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:27 crc kubenswrapper[4482]: I1125 07:05:27.945378 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7d3714-955e-451b-a10b-7a685d9484f1-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.042861 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.049060 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.060017 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:28 crc kubenswrapper[4482]: E1125 07:05:28.060530 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="sg-core" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.060606 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="sg-core" Nov 25 07:05:28 crc kubenswrapper[4482]: E1125 07:05:28.060667 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="ceilometer-notification-agent" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.060714 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="ceilometer-notification-agent" Nov 25 07:05:28 crc kubenswrapper[4482]: E1125 07:05:28.060784 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="ceilometer-central-agent" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.060833 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="ceilometer-central-agent" Nov 25 07:05:28 crc kubenswrapper[4482]: E1125 07:05:28.060880 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="proxy-httpd" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.060924 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="proxy-httpd" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.061136 4482 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="proxy-httpd" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.061223 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="ceilometer-notification-agent" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.061291 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="sg-core" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.061362 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" containerName="ceilometer-central-agent" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.062933 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.069190 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.069190 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.069540 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.076623 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.149731 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-log-httpd\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.149797 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.149864 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssbjg\" (UniqueName: \"kubernetes.io/projected/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-kube-api-access-ssbjg\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.149893 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.149935 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-scripts\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.150004 4482 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-run-httpd\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.150035 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-config-data\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.150305 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.252783 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-log-httpd\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.252827 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.252870 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssbjg\" (UniqueName: \"kubernetes.io/projected/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-kube-api-access-ssbjg\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.252902 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.252926 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-scripts\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.252956 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-run-httpd\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.252990 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-config-data\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 
07:05:28.253021 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.253387 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-log-httpd\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.253512 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-run-httpd\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.258244 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-scripts\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.258838 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.259032 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.259605 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-config-data\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.260572 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.274714 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssbjg\" (UniqueName: \"kubernetes.io/projected/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-kube-api-access-ssbjg\") pod \"ceilometer-0\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.378043 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:05:28 crc kubenswrapper[4482]: I1125 07:05:28.842161 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:29 crc kubenswrapper[4482]: I1125 07:05:29.180397 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 07:05:29 crc kubenswrapper[4482]: I1125 07:05:29.180468 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 07:05:29 crc kubenswrapper[4482]: I1125 07:05:29.731819 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6","Type":"ContainerStarted","Data":"37c16d803bfb39fd5c6b81dd417763d584573d3fefa6a80ad16638d3b1b48898"} Nov 25 07:05:29 crc kubenswrapper[4482]: I1125 07:05:29.732083 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6","Type":"ContainerStarted","Data":"943a4659d50064d2813a6d0259b1e3b7c1970ef7da0c348a486c5694d6ef5f56"} Nov 25 07:05:29 crc kubenswrapper[4482]: I1125 07:05:29.911814 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e7d3714-955e-451b-a10b-7a685d9484f1" path="/var/lib/kubelet/pods/2e7d3714-955e-451b-a10b-7a685d9484f1/volumes" Nov 25 07:05:30 crc kubenswrapper[4482]: I1125 07:05:30.745809 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6","Type":"ContainerStarted","Data":"85691be0041f02c4287fc7550d92cc9127b98d31d214dd7d0fdd6066f102571c"} Nov 25 07:05:31 crc kubenswrapper[4482]: I1125 07:05:31.758837 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6","Type":"ContainerStarted","Data":"1957114b084b2a99446b2d0c12e8b727f69ba14d0829e4216d3a9c138baa8106"} Nov 25 07:05:32 crc kubenswrapper[4482]: I1125 07:05:32.768628 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6","Type":"ContainerStarted","Data":"a23908041bf463ff6c5fba269cfbdb5bfa15a92649682788b81e941532656669"} Nov 25 07:05:32 crc kubenswrapper[4482]: I1125 07:05:32.769011 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 07:05:32 crc kubenswrapper[4482]: I1125 07:05:32.786633 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.1402053269999999 podStartE2EDuration="4.786617335s" podCreationTimestamp="2025-11-25 07:05:28 +0000 UTC" firstStartedPulling="2025-11-25 07:05:28.848161727 +0000 UTC m=+1103.336392987" lastFinishedPulling="2025-11-25 07:05:32.494573736 +0000 UTC m=+1106.982804995" observedRunningTime="2025-11-25 07:05:32.783155839 +0000 UTC m=+1107.271387098" watchObservedRunningTime="2025-11-25 07:05:32.786617335 +0000 UTC m=+1107.274848594" Nov 25 07:05:34 crc kubenswrapper[4482]: I1125 07:05:34.128379 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 07:05:34 crc kubenswrapper[4482]: I1125 07:05:34.128434 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 07:05:34 crc kubenswrapper[4482]: I1125 07:05:34.179909 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Nov 25 07:05:34 crc kubenswrapper[4482]: I1125 07:05:34.180934 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 07:05:35 crc kubenswrapper[4482]: I1125 07:05:35.213663 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="83225fe0-ff09-448e-be31-e7b06a13d7c8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 07:05:35 crc kubenswrapper[4482]: I1125 07:05:35.214459 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="83225fe0-ff09-448e-be31-e7b06a13d7c8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 07:05:35 crc kubenswrapper[4482]: I1125 07:05:35.226636 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="67bf34f2-664a-4065-88a4-115114e4d445" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 07:05:35 crc kubenswrapper[4482]: I1125 07:05:35.226756 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="67bf34f2-664a-4065-88a4-115114e4d445" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 07:05:39 crc kubenswrapper[4482]: I1125 07:05:39.117973 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:05:39 crc kubenswrapper[4482]: I1125 07:05:39.118590 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:05:41 crc kubenswrapper[4482]: I1125 07:05:41.869364 4482 generic.go:334] "Generic (PLEG): container finished" podID="d227e6f6-3610-4db4-a5d1-b60bb5285194" containerID="98b36de37d32104b8615e400e1fc197432e56a59afd50002c6951eeabcfa5ab4" exitCode=137 Nov 25 07:05:41 crc kubenswrapper[4482]: I1125 07:05:41.870189 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d227e6f6-3610-4db4-a5d1-b60bb5285194","Type":"ContainerDied","Data":"98b36de37d32104b8615e400e1fc197432e56a59afd50002c6951eeabcfa5ab4"} Nov 25 07:05:41 crc kubenswrapper[4482]: I1125 07:05:41.888413 4482 generic.go:334] "Generic (PLEG): container finished" podID="0295ea9f-b4e8-435d-9c64-e0c02c3defa9" containerID="2da1b60b5c057ac5c7b37fd93a1120484a789e08e0bffd9cdca6af5cb535401a" exitCode=137 Nov 25 07:05:41 crc kubenswrapper[4482]: I1125 07:05:41.888452 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0295ea9f-b4e8-435d-9c64-e0c02c3defa9","Type":"ContainerDied","Data":"2da1b60b5c057ac5c7b37fd93a1120484a789e08e0bffd9cdca6af5cb535401a"} Nov 25 
07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.186338 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.191312 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.387545 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d227e6f6-3610-4db4-a5d1-b60bb5285194-config-data\") pod \"d227e6f6-3610-4db4-a5d1-b60bb5285194\" (UID: \"d227e6f6-3610-4db4-a5d1-b60bb5285194\") " Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.387840 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-combined-ca-bundle\") pod \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\" (UID: \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\") " Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.387922 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvjst\" (UniqueName: \"kubernetes.io/projected/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-kube-api-access-qvjst\") pod \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\" (UID: \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\") " Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.387946 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d227e6f6-3610-4db4-a5d1-b60bb5285194-combined-ca-bundle\") pod \"d227e6f6-3610-4db4-a5d1-b60bb5285194\" (UID: \"d227e6f6-3610-4db4-a5d1-b60bb5285194\") " Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.388043 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltc4r\" (UniqueName: \"kubernetes.io/projected/d227e6f6-3610-4db4-a5d1-b60bb5285194-kube-api-access-ltc4r\") pod \"d227e6f6-3610-4db4-a5d1-b60bb5285194\" (UID: \"d227e6f6-3610-4db4-a5d1-b60bb5285194\") " Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.388181 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-config-data\") pod \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\" (UID: \"0295ea9f-b4e8-435d-9c64-e0c02c3defa9\") " Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.407360 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-kube-api-access-qvjst" (OuterVolumeSpecName: "kube-api-access-qvjst") pod "0295ea9f-b4e8-435d-9c64-e0c02c3defa9" (UID: "0295ea9f-b4e8-435d-9c64-e0c02c3defa9"). InnerVolumeSpecName "kube-api-access-qvjst". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.410354 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d227e6f6-3610-4db4-a5d1-b60bb5285194-kube-api-access-ltc4r" (OuterVolumeSpecName: "kube-api-access-ltc4r") pod "d227e6f6-3610-4db4-a5d1-b60bb5285194" (UID: "d227e6f6-3610-4db4-a5d1-b60bb5285194"). InnerVolumeSpecName "kube-api-access-ltc4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.438663 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d227e6f6-3610-4db4-a5d1-b60bb5285194-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d227e6f6-3610-4db4-a5d1-b60bb5285194" (UID: "d227e6f6-3610-4db4-a5d1-b60bb5285194"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.439629 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0295ea9f-b4e8-435d-9c64-e0c02c3defa9" (UID: "0295ea9f-b4e8-435d-9c64-e0c02c3defa9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.439776 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-config-data" (OuterVolumeSpecName: "config-data") pod "0295ea9f-b4e8-435d-9c64-e0c02c3defa9" (UID: "0295ea9f-b4e8-435d-9c64-e0c02c3defa9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.446363 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d227e6f6-3610-4db4-a5d1-b60bb5285194-config-data" (OuterVolumeSpecName: "config-data") pod "d227e6f6-3610-4db4-a5d1-b60bb5285194" (UID: "d227e6f6-3610-4db4-a5d1-b60bb5285194"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.491084 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.491116 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d227e6f6-3610-4db4-a5d1-b60bb5285194-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.491126 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.491138 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvjst\" (UniqueName: \"kubernetes.io/projected/0295ea9f-b4e8-435d-9c64-e0c02c3defa9-kube-api-access-qvjst\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.491147 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d227e6f6-3610-4db4-a5d1-b60bb5285194-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.491155 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltc4r\" (UniqueName: \"kubernetes.io/projected/d227e6f6-3610-4db4-a5d1-b60bb5285194-kube-api-access-ltc4r\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.900719 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d227e6f6-3610-4db4-a5d1-b60bb5285194","Type":"ContainerDied","Data":"a1b995a7250703bbe5ad9caa8eb1feb37e30e8d081c37cbc6a412d8ab551e68a"} Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.900797 4482 scope.go:117] "RemoveContainer" containerID="98b36de37d32104b8615e400e1fc197432e56a59afd50002c6951eeabcfa5ab4" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.901448 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.903945 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0295ea9f-b4e8-435d-9c64-e0c02c3defa9","Type":"ContainerDied","Data":"14a1a630676e63ef7d3ff062c156a1d363a5d64cd9741249df26095d69e2d3e9"} Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.904030 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.930405 4482 scope.go:117] "RemoveContainer" containerID="2da1b60b5c057ac5c7b37fd93a1120484a789e08e0bffd9cdca6af5cb535401a" Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.950111 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.975684 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:05:42 crc kubenswrapper[4482]: I1125 07:05:42.997222 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.020268 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.029302 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:05:43 crc kubenswrapper[4482]: E1125 07:05:43.029757 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0295ea9f-b4e8-435d-9c64-e0c02c3defa9" containerName="nova-scheduler-scheduler" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.029776 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0295ea9f-b4e8-435d-9c64-e0c02c3defa9" containerName="nova-scheduler-scheduler" Nov 25 07:05:43 crc kubenswrapper[4482]: E1125 07:05:43.029801 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d227e6f6-3610-4db4-a5d1-b60bb5285194" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.029807 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="d227e6f6-3610-4db4-a5d1-b60bb5285194" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.030031 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="0295ea9f-b4e8-435d-9c64-e0c02c3defa9" containerName="nova-scheduler-scheduler" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.030057 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="d227e6f6-3610-4db4-a5d1-b60bb5285194" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.030677 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.035634 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.035893 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.036022 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.042501 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.059439 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.060654 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.068401 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.068717 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.109087 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827243a4-101f-49ab-8219-24fae0a7ea82-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"827243a4-101f-49ab-8219-24fae0a7ea82\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.109140 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.109186 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h6zr\" (UniqueName: \"kubernetes.io/projected/827243a4-101f-49ab-8219-24fae0a7ea82-kube-api-access-7h6zr\") pod \"nova-scheduler-0\" (UID: \"827243a4-101f-49ab-8219-24fae0a7ea82\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.109203 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t455n\" (UniqueName: \"kubernetes.io/projected/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-kube-api-access-t455n\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.109234 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.109293 4482 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.109398 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827243a4-101f-49ab-8219-24fae0a7ea82-config-data\") pod \"nova-scheduler-0\" (UID: \"827243a4-101f-49ab-8219-24fae0a7ea82\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.109456 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.211316 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.211420 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h6zr\" (UniqueName: \"kubernetes.io/projected/827243a4-101f-49ab-8219-24fae0a7ea82-kube-api-access-7h6zr\") pod \"nova-scheduler-0\" (UID: \"827243a4-101f-49ab-8219-24fae0a7ea82\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.211443 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t455n\" (UniqueName: \"kubernetes.io/projected/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-kube-api-access-t455n\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.211483 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.211527 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.211750 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827243a4-101f-49ab-8219-24fae0a7ea82-config-data\") pod \"nova-scheduler-0\" (UID: \"827243a4-101f-49ab-8219-24fae0a7ea82\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.211863 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.211942 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827243a4-101f-49ab-8219-24fae0a7ea82-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"827243a4-101f-49ab-8219-24fae0a7ea82\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.226963 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.227003 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.227809 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h6zr\" (UniqueName: \"kubernetes.io/projected/827243a4-101f-49ab-8219-24fae0a7ea82-kube-api-access-7h6zr\") pod \"nova-scheduler-0\" (UID: \"827243a4-101f-49ab-8219-24fae0a7ea82\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.229717 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827243a4-101f-49ab-8219-24fae0a7ea82-config-data\") pod \"nova-scheduler-0\" (UID: \"827243a4-101f-49ab-8219-24fae0a7ea82\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.233688 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.235517 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t455n\" (UniqueName: \"kubernetes.io/projected/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-kube-api-access-t455n\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.245664 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827243a4-101f-49ab-8219-24fae0a7ea82-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"827243a4-101f-49ab-8219-24fae0a7ea82\") " pod="openstack/nova-scheduler-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.246770 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3062adf8-d13f-443b-bb06-1ca8d8b2edd2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3062adf8-d13f-443b-bb06-1ca8d8b2edd2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 
crc kubenswrapper[4482]: I1125 07:05:43.350582 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.385727 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.606889 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.723473 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-config-data-custom\") pod \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.723627 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-config-data\") pod \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.723685 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq2t4\" (UniqueName: \"kubernetes.io/projected/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-kube-api-access-vq2t4\") pod \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.724366 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-combined-ca-bundle\") pod \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\" (UID: \"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2\") " Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.730529 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2" (UID: "59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.732418 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-kube-api-access-vq2t4" (OuterVolumeSpecName: "kube-api-access-vq2t4") pod "59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2" (UID: "59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2"). InnerVolumeSpecName "kube-api-access-vq2t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.762339 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2" (UID: "59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.776056 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-config-data" (OuterVolumeSpecName: "config-data") pod "59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2" (UID: "59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.828368 4482 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.828399 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.828409 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq2t4\" (UniqueName: \"kubernetes.io/projected/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-kube-api-access-vq2t4\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.828418 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.841439 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0295ea9f-b4e8-435d-9c64-e0c02c3defa9" path="/var/lib/kubelet/pods/0295ea9f-b4e8-435d-9c64-e0c02c3defa9/volumes" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.841962 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d227e6f6-3610-4db4-a5d1-b60bb5285194" path="/var/lib/kubelet/pods/d227e6f6-3610-4db4-a5d1-b60bb5285194/volumes" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.867524 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.948451 4482 generic.go:334] "Generic (PLEG): container finished" podID="59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2" containerID="0e9424d4a7c61488cb893f9525602fe04c35fc4abb72b3457b70a61c7bf4e7ad" exitCode=137 Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.948668 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6f98797bb6-chb76" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.949756 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f98797bb6-chb76" event={"ID":"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2","Type":"ContainerDied","Data":"0e9424d4a7c61488cb893f9525602fe04c35fc4abb72b3457b70a61c7bf4e7ad"} Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.949885 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f98797bb6-chb76" event={"ID":"59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2","Type":"ContainerDied","Data":"61465e247380cd5be8f0901fa7f72a34e7d1faf428f3a6fe2658bf41dc8896c0"} Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.949954 4482 scope.go:117] "RemoveContainer" containerID="0e9424d4a7c61488cb893f9525602fe04c35fc4abb72b3457b70a61c7bf4e7ad" Nov 25 07:05:43 crc kubenswrapper[4482]: I1125 07:05:43.951647 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3062adf8-d13f-443b-bb06-1ca8d8b2edd2","Type":"ContainerStarted","Data":"ef23c3b1a6762a89fbe4f2acb0321f68be36f7c2715477025304c895e8d0fc2d"} Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.009017 4482 scope.go:117] "RemoveContainer" containerID="0e9424d4a7c61488cb893f9525602fe04c35fc4abb72b3457b70a61c7bf4e7ad" Nov 25 07:05:44 crc kubenswrapper[4482]: E1125 07:05:44.011991 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e9424d4a7c61488cb893f9525602fe04c35fc4abb72b3457b70a61c7bf4e7ad\": container with ID starting with 0e9424d4a7c61488cb893f9525602fe04c35fc4abb72b3457b70a61c7bf4e7ad not found: ID does not exist" containerID="0e9424d4a7c61488cb893f9525602fe04c35fc4abb72b3457b70a61c7bf4e7ad" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.012047 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e9424d4a7c61488cb893f9525602fe04c35fc4abb72b3457b70a61c7bf4e7ad"} err="failed to get container status \"0e9424d4a7c61488cb893f9525602fe04c35fc4abb72b3457b70a61c7bf4e7ad\": rpc error: code = NotFound desc = could not find container \"0e9424d4a7c61488cb893f9525602fe04c35fc4abb72b3457b70a61c7bf4e7ad\": container with ID starting with 0e9424d4a7c61488cb893f9525602fe04c35fc4abb72b3457b70a61c7bf4e7ad not found: ID does not exist" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.029128 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.040034 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f98797bb6-chb76"] Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.050815 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6f98797bb6-chb76"] Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.130927 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.131022 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.131641 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.131856 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 
07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.135987 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.136199 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.573003 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86bf444cbf-szzdl"] Nov 25 07:05:44 crc kubenswrapper[4482]: E1125 07:05:44.573463 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2" containerName="heat-cfnapi" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.573483 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2" containerName="heat-cfnapi" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.573682 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2" containerName="heat-cfnapi" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.574677 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.609914 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86bf444cbf-szzdl"] Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.668976 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-ovsdbserver-nb\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.669063 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-dns-svc\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.669154 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-config\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.669470 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l69zx\" (UniqueName: \"kubernetes.io/projected/fcdb3d0c-8d88-49e0-b213-703b54444699-kube-api-access-l69zx\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.669573 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-dns-swift-storage-0\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.669635 4482 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-ovsdbserver-sb\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.671847 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.695163 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.698344 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.774193 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l69zx\" (UniqueName: \"kubernetes.io/projected/fcdb3d0c-8d88-49e0-b213-703b54444699-kube-api-access-l69zx\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.774730 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-dns-swift-storage-0\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.774849 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-ovsdbserver-sb\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.775051 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-ovsdbserver-nb\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.775150 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-dns-svc\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.775306 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-config\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.778184 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-dns-swift-storage-0\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 
07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.778388 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-config\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.778887 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-ovsdbserver-sb\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.779421 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-ovsdbserver-nb\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.779924 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-dns-svc\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.792795 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l69zx\" (UniqueName: \"kubernetes.io/projected/fcdb3d0c-8d88-49e0-b213-703b54444699-kube-api-access-l69zx\") pod \"dnsmasq-dns-86bf444cbf-szzdl\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.890105 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.986773 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3062adf8-d13f-443b-bb06-1ca8d8b2edd2","Type":"ContainerStarted","Data":"7d2847817b314dd70e132945b106eb7180c1f41b1d6540ec0502313bd88a11b6"} Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.996089 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"827243a4-101f-49ab-8219-24fae0a7ea82","Type":"ContainerStarted","Data":"38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa"} Nov 25 07:05:44 crc kubenswrapper[4482]: I1125 07:05:44.996120 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"827243a4-101f-49ab-8219-24fae0a7ea82","Type":"ContainerStarted","Data":"f39a4891ee2e90ba00c1c2f68e7287b75fff9294421af995ee8f2f99dd48da0d"} Nov 25 07:05:45 crc kubenswrapper[4482]: I1125 07:05:45.012352 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 07:05:45 crc kubenswrapper[4482]: I1125 07:05:45.012648 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.012631599 podStartE2EDuration="3.012631599s" podCreationTimestamp="2025-11-25 07:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:45.010339748 +0000 UTC m=+1119.498571007" watchObservedRunningTime="2025-11-25 07:05:45.012631599 +0000 UTC m=+1119.500862858" Nov 25 07:05:45 crc kubenswrapper[4482]: I1125 07:05:45.029379 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.029359384 podStartE2EDuration="3.029359384s" podCreationTimestamp="2025-11-25 07:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:45.026968527 +0000 UTC m=+1119.515199786" watchObservedRunningTime="2025-11-25 07:05:45.029359384 +0000 UTC m=+1119.517590633" Nov 25 07:05:45 crc kubenswrapper[4482]: I1125 07:05:45.516200 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86bf444cbf-szzdl"] Nov 25 07:05:45 crc kubenswrapper[4482]: I1125 07:05:45.845889 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2" path="/var/lib/kubelet/pods/59c42e2e-5faf-48d7-b8ca-5d3db6b03bb2/volumes" Nov 25 07:05:46 crc kubenswrapper[4482]: I1125 07:05:46.005073 4482 generic.go:334] "Generic (PLEG): container finished" podID="fcdb3d0c-8d88-49e0-b213-703b54444699" containerID="a8b9654fbd181e4336061e77b6962fbefc76916f4d36bd7548d76d292a43a0ea" exitCode=0 Nov 25 07:05:46 crc kubenswrapper[4482]: I1125 07:05:46.007744 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" event={"ID":"fcdb3d0c-8d88-49e0-b213-703b54444699","Type":"ContainerDied","Data":"a8b9654fbd181e4336061e77b6962fbefc76916f4d36bd7548d76d292a43a0ea"} Nov 25 07:05:46 crc kubenswrapper[4482]: I1125 07:05:46.007779 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" 
event={"ID":"fcdb3d0c-8d88-49e0-b213-703b54444699","Type":"ContainerStarted","Data":"5c5ed78306e7d45bfe50a8beaa1ec811c76c637952402efd9dcaf5d4fffa339d"} Nov 25 07:05:47 crc kubenswrapper[4482]: I1125 07:05:47.020032 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" event={"ID":"fcdb3d0c-8d88-49e0-b213-703b54444699","Type":"ContainerStarted","Data":"d689afe052284a34af6acd3a9315547af9939279cb32e83f47259f5b91433500"} Nov 25 07:05:47 crc kubenswrapper[4482]: I1125 07:05:47.020730 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:47 crc kubenswrapper[4482]: I1125 07:05:47.042781 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" podStartSLOduration=3.042769586 podStartE2EDuration="3.042769586s" podCreationTimestamp="2025-11-25 07:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:47.039210055 +0000 UTC m=+1121.527441313" watchObservedRunningTime="2025-11-25 07:05:47.042769586 +0000 UTC m=+1121.531000845" Nov 25 07:05:47 crc kubenswrapper[4482]: I1125 07:05:47.177294 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:47 crc kubenswrapper[4482]: I1125 07:05:47.177589 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83225fe0-ff09-448e-be31-e7b06a13d7c8" containerName="nova-api-log" containerID="cri-o://d95cbb2b02673d1e482e88fb1f958a49dd836adcc5f91fd9b5a4e458ed89eafa" gracePeriod=30 Nov 25 07:05:47 crc kubenswrapper[4482]: I1125 07:05:47.177680 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="83225fe0-ff09-448e-be31-e7b06a13d7c8" containerName="nova-api-api" containerID="cri-o://b632122a419bb25bca1c35ac87843ab477aca2e95ff263c997164a3336b9a384" gracePeriod=30 Nov 25 07:05:47 crc kubenswrapper[4482]: I1125 07:05:47.266570 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:47 crc kubenswrapper[4482]: I1125 07:05:47.266934 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="ceilometer-central-agent" containerID="cri-o://37c16d803bfb39fd5c6b81dd417763d584573d3fefa6a80ad16638d3b1b48898" gracePeriod=30 Nov 25 07:05:47 crc kubenswrapper[4482]: I1125 07:05:47.267008 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="sg-core" containerID="cri-o://1957114b084b2a99446b2d0c12e8b727f69ba14d0829e4216d3a9c138baa8106" gracePeriod=30 Nov 25 07:05:47 crc kubenswrapper[4482]: I1125 07:05:47.266978 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="ceilometer-notification-agent" containerID="cri-o://85691be0041f02c4287fc7550d92cc9127b98d31d214dd7d0fdd6066f102571c" gracePeriod=30 Nov 25 07:05:47 crc kubenswrapper[4482]: I1125 07:05:47.266994 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="proxy-httpd" 
containerID="cri-o://a23908041bf463ff6c5fba269cfbdb5bfa15a92649682788b81e941532656669" gracePeriod=30 Nov 25 07:05:47 crc kubenswrapper[4482]: I1125 07:05:47.365528 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.207:3000/\": EOF" Nov 25 07:05:48 crc kubenswrapper[4482]: I1125 07:05:48.028275 4482 generic.go:334] "Generic (PLEG): container finished" podID="83225fe0-ff09-448e-be31-e7b06a13d7c8" containerID="d95cbb2b02673d1e482e88fb1f958a49dd836adcc5f91fd9b5a4e458ed89eafa" exitCode=143 Nov 25 07:05:48 crc kubenswrapper[4482]: I1125 07:05:48.028557 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83225fe0-ff09-448e-be31-e7b06a13d7c8","Type":"ContainerDied","Data":"d95cbb2b02673d1e482e88fb1f958a49dd836adcc5f91fd9b5a4e458ed89eafa"} Nov 25 07:05:48 crc kubenswrapper[4482]: I1125 07:05:48.030672 4482 generic.go:334] "Generic (PLEG): container finished" podID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerID="a23908041bf463ff6c5fba269cfbdb5bfa15a92649682788b81e941532656669" exitCode=0 Nov 25 07:05:48 crc kubenswrapper[4482]: I1125 07:05:48.030694 4482 generic.go:334] "Generic (PLEG): container finished" podID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerID="1957114b084b2a99446b2d0c12e8b727f69ba14d0829e4216d3a9c138baa8106" exitCode=2 Nov 25 07:05:48 crc kubenswrapper[4482]: I1125 07:05:48.030702 4482 generic.go:334] "Generic (PLEG): container finished" podID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerID="37c16d803bfb39fd5c6b81dd417763d584573d3fefa6a80ad16638d3b1b48898" exitCode=0 Nov 25 07:05:48 crc kubenswrapper[4482]: I1125 07:05:48.031522 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6","Type":"ContainerDied","Data":"a23908041bf463ff6c5fba269cfbdb5bfa15a92649682788b81e941532656669"} Nov 25 07:05:48 crc kubenswrapper[4482]: I1125 07:05:48.031547 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6","Type":"ContainerDied","Data":"1957114b084b2a99446b2d0c12e8b727f69ba14d0829e4216d3a9c138baa8106"} Nov 25 07:05:48 crc kubenswrapper[4482]: I1125 07:05:48.031557 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6","Type":"ContainerDied","Data":"37c16d803bfb39fd5c6b81dd417763d584573d3fefa6a80ad16638d3b1b48898"} Nov 25 07:05:48 crc kubenswrapper[4482]: I1125 07:05:48.350706 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:48 crc kubenswrapper[4482]: I1125 07:05:48.386936 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.499080 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.619041 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-combined-ca-bundle\") pod \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.619086 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-config-data\") pod \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.619142 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-scripts\") pod \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.619242 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-log-httpd\") pod \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.619373 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-sg-core-conf-yaml\") pod \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.619426 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-run-httpd\") pod \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.619454 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssbjg\" (UniqueName: \"kubernetes.io/projected/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-kube-api-access-ssbjg\") pod \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.619596 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-ceilometer-tls-certs\") pod \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\" (UID: \"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6\") " Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.625938 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" (UID: "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.626929 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" (UID: "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.641792 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-scripts" (OuterVolumeSpecName: "scripts") pod "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" (UID: "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.650034 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-kube-api-access-ssbjg" (OuterVolumeSpecName: "kube-api-access-ssbjg") pod "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" (UID: "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6"). InnerVolumeSpecName "kube-api-access-ssbjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.704380 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" (UID: "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.723028 4482 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.723063 4482 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.723075 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssbjg\" (UniqueName: \"kubernetes.io/projected/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-kube-api-access-ssbjg\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.723087 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.723097 4482 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.725231 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" (UID: "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.755298 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-config-data" (OuterVolumeSpecName: "config-data") pod "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" (UID: "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.755466 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" (UID: "ba060e8f-2208-45a2-8d0b-cd7e2172d6b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.825214 4482 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.825318 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:50 crc kubenswrapper[4482]: I1125 07:05:50.825375 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.060396 4482 generic.go:334] "Generic (PLEG): container finished" podID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerID="85691be0041f02c4287fc7550d92cc9127b98d31d214dd7d0fdd6066f102571c" exitCode=0 Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.060488 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6","Type":"ContainerDied","Data":"85691be0041f02c4287fc7550d92cc9127b98d31d214dd7d0fdd6066f102571c"} Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.060499 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.060569 4482 scope.go:117] "RemoveContainer" containerID="a23908041bf463ff6c5fba269cfbdb5bfa15a92649682788b81e941532656669" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.060551 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba060e8f-2208-45a2-8d0b-cd7e2172d6b6","Type":"ContainerDied","Data":"943a4659d50064d2813a6d0259b1e3b7c1970ef7da0c348a486c5694d6ef5f56"} Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.063290 4482 generic.go:334] "Generic (PLEG): container finished" podID="83225fe0-ff09-448e-be31-e7b06a13d7c8" containerID="b632122a419bb25bca1c35ac87843ab477aca2e95ff263c997164a3336b9a384" exitCode=0 Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.063334 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83225fe0-ff09-448e-be31-e7b06a13d7c8","Type":"ContainerDied","Data":"b632122a419bb25bca1c35ac87843ab477aca2e95ff263c997164a3336b9a384"} Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.084326 4482 scope.go:117] "RemoveContainer" containerID="1957114b084b2a99446b2d0c12e8b727f69ba14d0829e4216d3a9c138baa8106" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.101309 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.110598 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.127377 4482 scope.go:117] "RemoveContainer" containerID="85691be0041f02c4287fc7550d92cc9127b98d31d214dd7d0fdd6066f102571c" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.130946 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:51 crc kubenswrapper[4482]: E1125 07:05:51.131408 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="ceilometer-central-agent" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.131426 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="ceilometer-central-agent" Nov 25 07:05:51 crc kubenswrapper[4482]: E1125 07:05:51.131439 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="ceilometer-notification-agent" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.131446 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="ceilometer-notification-agent" Nov 25 07:05:51 crc kubenswrapper[4482]: E1125 07:05:51.131456 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="proxy-httpd" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.131462 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="proxy-httpd" Nov 25 07:05:51 crc kubenswrapper[4482]: E1125 07:05:51.131506 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="sg-core" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.131514 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="sg-core" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.131683 
4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="proxy-httpd" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.131702 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="ceilometer-notification-agent" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.131712 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="ceilometer-central-agent" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.131720 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" containerName="sg-core" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.133707 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.142394 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.142583 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.142718 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.164369 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.184221 4482 scope.go:117] "RemoveContainer" containerID="37c16d803bfb39fd5c6b81dd417763d584573d3fefa6a80ad16638d3b1b48898" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.210242 4482 scope.go:117] "RemoveContainer" containerID="a23908041bf463ff6c5fba269cfbdb5bfa15a92649682788b81e941532656669" Nov 25 07:05:51 crc kubenswrapper[4482]: E1125 07:05:51.210558 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a23908041bf463ff6c5fba269cfbdb5bfa15a92649682788b81e941532656669\": container with ID starting with a23908041bf463ff6c5fba269cfbdb5bfa15a92649682788b81e941532656669 not found: ID does not exist" containerID="a23908041bf463ff6c5fba269cfbdb5bfa15a92649682788b81e941532656669" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.210589 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a23908041bf463ff6c5fba269cfbdb5bfa15a92649682788b81e941532656669"} err="failed to get container status \"a23908041bf463ff6c5fba269cfbdb5bfa15a92649682788b81e941532656669\": rpc error: code = NotFound desc = could not find container \"a23908041bf463ff6c5fba269cfbdb5bfa15a92649682788b81e941532656669\": container with ID starting with a23908041bf463ff6c5fba269cfbdb5bfa15a92649682788b81e941532656669 not found: ID does not exist" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.210612 4482 scope.go:117] "RemoveContainer" containerID="1957114b084b2a99446b2d0c12e8b727f69ba14d0829e4216d3a9c138baa8106" Nov 25 07:05:51 crc kubenswrapper[4482]: E1125 07:05:51.210813 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1957114b084b2a99446b2d0c12e8b727f69ba14d0829e4216d3a9c138baa8106\": container with ID starting with 1957114b084b2a99446b2d0c12e8b727f69ba14d0829e4216d3a9c138baa8106 not found: ID does not 
exist" containerID="1957114b084b2a99446b2d0c12e8b727f69ba14d0829e4216d3a9c138baa8106" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.210836 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1957114b084b2a99446b2d0c12e8b727f69ba14d0829e4216d3a9c138baa8106"} err="failed to get container status \"1957114b084b2a99446b2d0c12e8b727f69ba14d0829e4216d3a9c138baa8106\": rpc error: code = NotFound desc = could not find container \"1957114b084b2a99446b2d0c12e8b727f69ba14d0829e4216d3a9c138baa8106\": container with ID starting with 1957114b084b2a99446b2d0c12e8b727f69ba14d0829e4216d3a9c138baa8106 not found: ID does not exist" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.210852 4482 scope.go:117] "RemoveContainer" containerID="85691be0041f02c4287fc7550d92cc9127b98d31d214dd7d0fdd6066f102571c" Nov 25 07:05:51 crc kubenswrapper[4482]: E1125 07:05:51.211032 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85691be0041f02c4287fc7550d92cc9127b98d31d214dd7d0fdd6066f102571c\": container with ID starting with 85691be0041f02c4287fc7550d92cc9127b98d31d214dd7d0fdd6066f102571c not found: ID does not exist" containerID="85691be0041f02c4287fc7550d92cc9127b98d31d214dd7d0fdd6066f102571c" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.211053 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85691be0041f02c4287fc7550d92cc9127b98d31d214dd7d0fdd6066f102571c"} err="failed to get container status \"85691be0041f02c4287fc7550d92cc9127b98d31d214dd7d0fdd6066f102571c\": rpc error: code = NotFound desc = could not find container \"85691be0041f02c4287fc7550d92cc9127b98d31d214dd7d0fdd6066f102571c\": container with ID starting with 85691be0041f02c4287fc7550d92cc9127b98d31d214dd7d0fdd6066f102571c not found: ID does not exist" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.211066 4482 scope.go:117] "RemoveContainer" containerID="37c16d803bfb39fd5c6b81dd417763d584573d3fefa6a80ad16638d3b1b48898" Nov 25 07:05:51 crc kubenswrapper[4482]: E1125 07:05:51.211291 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c16d803bfb39fd5c6b81dd417763d584573d3fefa6a80ad16638d3b1b48898\": container with ID starting with 37c16d803bfb39fd5c6b81dd417763d584573d3fefa6a80ad16638d3b1b48898 not found: ID does not exist" containerID="37c16d803bfb39fd5c6b81dd417763d584573d3fefa6a80ad16638d3b1b48898" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.211312 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c16d803bfb39fd5c6b81dd417763d584573d3fefa6a80ad16638d3b1b48898"} err="failed to get container status \"37c16d803bfb39fd5c6b81dd417763d584573d3fefa6a80ad16638d3b1b48898\": rpc error: code = NotFound desc = could not find container \"37c16d803bfb39fd5c6b81dd417763d584573d3fefa6a80ad16638d3b1b48898\": container with ID starting with 37c16d803bfb39fd5c6b81dd417763d584573d3fefa6a80ad16638d3b1b48898 not found: ID does not exist" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.234577 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2749f54f-b981-481f-9304-2f83ab6be1e8-run-httpd\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc 
kubenswrapper[4482]: I1125 07:05:51.234624 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.234659 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.234685 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2749f54f-b981-481f-9304-2f83ab6be1e8-log-httpd\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.234744 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.234914 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzpl9\" (UniqueName: \"kubernetes.io/projected/2749f54f-b981-481f-9304-2f83ab6be1e8-kube-api-access-xzpl9\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.234971 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-config-data\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.235483 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-scripts\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.338673 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-scripts\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.338745 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2749f54f-b981-481f-9304-2f83ab6be1e8-run-httpd\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.338778 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.338826 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.338853 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2749f54f-b981-481f-9304-2f83ab6be1e8-log-httpd\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.338878 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.338909 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzpl9\" (UniqueName: \"kubernetes.io/projected/2749f54f-b981-481f-9304-2f83ab6be1e8-kube-api-access-xzpl9\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.338934 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-config-data\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.340588 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2749f54f-b981-481f-9304-2f83ab6be1e8-run-httpd\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.341867 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2749f54f-b981-481f-9304-2f83ab6be1e8-log-httpd\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.347035 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.358971 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.360618 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.365229 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzpl9\" (UniqueName: \"kubernetes.io/projected/2749f54f-b981-481f-9304-2f83ab6be1e8-kube-api-access-xzpl9\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.371562 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-scripts\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.375226 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2749f54f-b981-481f-9304-2f83ab6be1e8-config-data\") pod \"ceilometer-0\" (UID: \"2749f54f-b981-481f-9304-2f83ab6be1e8\") " pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.491402 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.498457 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.550704 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83225fe0-ff09-448e-be31-e7b06a13d7c8-config-data\") pod \"83225fe0-ff09-448e-be31-e7b06a13d7c8\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.550760 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83225fe0-ff09-448e-be31-e7b06a13d7c8-logs\") pod \"83225fe0-ff09-448e-be31-e7b06a13d7c8\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.550836 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83225fe0-ff09-448e-be31-e7b06a13d7c8-combined-ca-bundle\") pod \"83225fe0-ff09-448e-be31-e7b06a13d7c8\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.550913 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74xqq\" (UniqueName: \"kubernetes.io/projected/83225fe0-ff09-448e-be31-e7b06a13d7c8-kube-api-access-74xqq\") pod \"83225fe0-ff09-448e-be31-e7b06a13d7c8\" (UID: \"83225fe0-ff09-448e-be31-e7b06a13d7c8\") " Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.553820 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83225fe0-ff09-448e-be31-e7b06a13d7c8-logs" (OuterVolumeSpecName: "logs") pod "83225fe0-ff09-448e-be31-e7b06a13d7c8" (UID: "83225fe0-ff09-448e-be31-e7b06a13d7c8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.569430 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83225fe0-ff09-448e-be31-e7b06a13d7c8-kube-api-access-74xqq" (OuterVolumeSpecName: "kube-api-access-74xqq") pod "83225fe0-ff09-448e-be31-e7b06a13d7c8" (UID: "83225fe0-ff09-448e-be31-e7b06a13d7c8"). InnerVolumeSpecName "kube-api-access-74xqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.607991 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83225fe0-ff09-448e-be31-e7b06a13d7c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83225fe0-ff09-448e-be31-e7b06a13d7c8" (UID: "83225fe0-ff09-448e-be31-e7b06a13d7c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.609775 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83225fe0-ff09-448e-be31-e7b06a13d7c8-config-data" (OuterVolumeSpecName: "config-data") pod "83225fe0-ff09-448e-be31-e7b06a13d7c8" (UID: "83225fe0-ff09-448e-be31-e7b06a13d7c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.653692 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83225fe0-ff09-448e-be31-e7b06a13d7c8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.653729 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83225fe0-ff09-448e-be31-e7b06a13d7c8-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.653741 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83225fe0-ff09-448e-be31-e7b06a13d7c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.653754 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74xqq\" (UniqueName: \"kubernetes.io/projected/83225fe0-ff09-448e-be31-e7b06a13d7c8-kube-api-access-74xqq\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:51 crc kubenswrapper[4482]: I1125 07:05:51.847387 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba060e8f-2208-45a2-8d0b-cd7e2172d6b6" path="/var/lib/kubelet/pods/ba060e8f-2208-45a2-8d0b-cd7e2172d6b6/volumes" Nov 25 07:05:52 crc kubenswrapper[4482]: W1125 07:05:52.053330 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2749f54f_b981_481f_9304_2f83ab6be1e8.slice/crio-ead5c88f1d968a67aa46b9582b960ce85a787d260d55ed1b916a3b017d60c8df WatchSource:0}: Error finding container ead5c88f1d968a67aa46b9582b960ce85a787d260d55ed1b916a3b017d60c8df: Status 404 returned error can't find the container with id ead5c88f1d968a67aa46b9582b960ce85a787d260d55ed1b916a3b017d60c8df Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.053380 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.072218 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2749f54f-b981-481f-9304-2f83ab6be1e8","Type":"ContainerStarted","Data":"ead5c88f1d968a67aa46b9582b960ce85a787d260d55ed1b916a3b017d60c8df"} Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.074777 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"83225fe0-ff09-448e-be31-e7b06a13d7c8","Type":"ContainerDied","Data":"7f2f20572574ae83097fb44eb43b82ed8e02d70506bfce7da3a6541c86be5d84"} Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.074817 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.074845 4482 scope.go:117] "RemoveContainer" containerID="b632122a419bb25bca1c35ac87843ab477aca2e95ff263c997164a3336b9a384" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.106581 4482 scope.go:117] "RemoveContainer" containerID="d95cbb2b02673d1e482e88fb1f958a49dd836adcc5f91fd9b5a4e458ed89eafa" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.112367 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.123264 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.143589 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:52 crc kubenswrapper[4482]: E1125 07:05:52.144068 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83225fe0-ff09-448e-be31-e7b06a13d7c8" containerName="nova-api-log" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.144081 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="83225fe0-ff09-448e-be31-e7b06a13d7c8" containerName="nova-api-log" Nov 25 07:05:52 crc kubenswrapper[4482]: E1125 07:05:52.144108 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83225fe0-ff09-448e-be31-e7b06a13d7c8" containerName="nova-api-api" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.144117 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="83225fe0-ff09-448e-be31-e7b06a13d7c8" containerName="nova-api-api" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.144337 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="83225fe0-ff09-448e-be31-e7b06a13d7c8" containerName="nova-api-api" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.144348 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="83225fe0-ff09-448e-be31-e7b06a13d7c8" containerName="nova-api-log" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.145327 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.148556 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.148772 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.148971 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.150304 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.271090 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-config-data\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.271564 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.271682 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6fc\" (UniqueName: \"kubernetes.io/projected/899a2e61-517a-4a6c-bc18-570b1a45e71a-kube-api-access-wf6fc\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.271764 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-public-tls-certs\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.271820 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899a2e61-517a-4a6c-bc18-570b1a45e71a-logs\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.271881 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.373934 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.374348 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf6fc\" (UniqueName: \"kubernetes.io/projected/899a2e61-517a-4a6c-bc18-570b1a45e71a-kube-api-access-wf6fc\") 
pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.374392 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-public-tls-certs\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.374424 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899a2e61-517a-4a6c-bc18-570b1a45e71a-logs\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.374456 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.374512 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-config-data\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.375733 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899a2e61-517a-4a6c-bc18-570b1a45e71a-logs\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.383293 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-public-tls-certs\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.383761 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.384649 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-config-data\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.384872 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.395622 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6fc\" (UniqueName: \"kubernetes.io/projected/899a2e61-517a-4a6c-bc18-570b1a45e71a-kube-api-access-wf6fc\") pod \"nova-api-0\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " pod="openstack/nova-api-0" Nov 
25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.465694 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:05:52 crc kubenswrapper[4482]: W1125 07:05:52.984271 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod899a2e61_517a_4a6c_bc18_570b1a45e71a.slice/crio-e5666327704aaa0a47ef6d83c7a07747f2f310ee3e2ddeeac6f3d085217b45cc WatchSource:0}: Error finding container e5666327704aaa0a47ef6d83c7a07747f2f310ee3e2ddeeac6f3d085217b45cc: Status 404 returned error can't find the container with id e5666327704aaa0a47ef6d83c7a07747f2f310ee3e2ddeeac6f3d085217b45cc Nov 25 07:05:52 crc kubenswrapper[4482]: I1125 07:05:52.986615 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:05:53 crc kubenswrapper[4482]: I1125 07:05:53.088843 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"899a2e61-517a-4a6c-bc18-570b1a45e71a","Type":"ContainerStarted","Data":"e5666327704aaa0a47ef6d83c7a07747f2f310ee3e2ddeeac6f3d085217b45cc"} Nov 25 07:05:53 crc kubenswrapper[4482]: I1125 07:05:53.090689 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2749f54f-b981-481f-9304-2f83ab6be1e8","Type":"ContainerStarted","Data":"5bf4308abf9c5597e19c7e64063af604aa80ff77f4b290bff1d2cdb9170e98cb"} Nov 25 07:05:53 crc kubenswrapper[4482]: I1125 07:05:53.351788 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:53 crc kubenswrapper[4482]: I1125 07:05:53.385953 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:53 crc kubenswrapper[4482]: I1125 07:05:53.387198 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 07:05:53 crc kubenswrapper[4482]: I1125 07:05:53.431530 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 07:05:53 crc kubenswrapper[4482]: I1125 07:05:53.838733 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83225fe0-ff09-448e-be31-e7b06a13d7c8" path="/var/lib/kubelet/pods/83225fe0-ff09-448e-be31-e7b06a13d7c8/volumes" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.103920 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"899a2e61-517a-4a6c-bc18-570b1a45e71a","Type":"ContainerStarted","Data":"8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0"} Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.103991 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"899a2e61-517a-4a6c-bc18-570b1a45e71a","Type":"ContainerStarted","Data":"3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c"} Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.107965 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2749f54f-b981-481f-9304-2f83ab6be1e8","Type":"ContainerStarted","Data":"2c3150182384fa8d762643c211a0053bf8fd839d4468b336c60cf8c83c3c4f5f"} Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.143185 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.145743 4482 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.145725184 podStartE2EDuration="2.145725184s" podCreationTimestamp="2025-11-25 07:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:54.141386624 +0000 UTC m=+1128.629617883" watchObservedRunningTime="2025-11-25 07:05:54.145725184 +0000 UTC m=+1128.633956442" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.150022 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.306983 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-n9bvq"] Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.308231 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.310459 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.310630 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.315680 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n9bvq"] Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.450938 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-config-data\") pod \"nova-cell1-cell-mapping-n9bvq\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.451323 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-scripts\") pod \"nova-cell1-cell-mapping-n9bvq\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.451372 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zhw7\" (UniqueName: \"kubernetes.io/projected/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-kube-api-access-2zhw7\") pod \"nova-cell1-cell-mapping-n9bvq\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.451398 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n9bvq\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.552469 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-config-data\") pod \"nova-cell1-cell-mapping-n9bvq\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc 
kubenswrapper[4482]: I1125 07:05:54.552547 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-scripts\") pod \"nova-cell1-cell-mapping-n9bvq\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.552592 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zhw7\" (UniqueName: \"kubernetes.io/projected/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-kube-api-access-2zhw7\") pod \"nova-cell1-cell-mapping-n9bvq\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.552616 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n9bvq\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.560108 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-scripts\") pod \"nova-cell1-cell-mapping-n9bvq\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.560750 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n9bvq\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.570889 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-config-data\") pod \"nova-cell1-cell-mapping-n9bvq\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.576613 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zhw7\" (UniqueName: \"kubernetes.io/projected/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-kube-api-access-2zhw7\") pod \"nova-cell1-cell-mapping-n9bvq\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.629161 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:05:54 crc kubenswrapper[4482]: I1125 07:05:54.893330 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:54.991001 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c75cdbd45-cj9pn"] Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:54.998726 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" podUID="19cf9dd3-f468-4483-8b4e-59a40245b45e" containerName="dnsmasq-dns" containerID="cri-o://5dccc611decd232cbbe6c6170f01eaa38b90ae02a10213c0a504c68d2a1ee294" gracePeriod=10 Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.053406 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n9bvq"] Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.138220 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n9bvq" event={"ID":"f9dd329e-7514-4dbf-9e8f-e34467fa66ab","Type":"ContainerStarted","Data":"f195a2d0933ff2b3033d04a46fd0b69652dfeea5bd83f750774f41d5a7052b10"} Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.167678 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2749f54f-b981-481f-9304-2f83ab6be1e8","Type":"ContainerStarted","Data":"32e2e0b98351fec210ffeb5b4e6ea4546afda026eb306d6f088c82f95e9cd585"} Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.659608 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.712365 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rwmg\" (UniqueName: \"kubernetes.io/projected/19cf9dd3-f468-4483-8b4e-59a40245b45e-kube-api-access-8rwmg\") pod \"19cf9dd3-f468-4483-8b4e-59a40245b45e\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.712623 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-ovsdbserver-nb\") pod \"19cf9dd3-f468-4483-8b4e-59a40245b45e\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.712715 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-dns-svc\") pod \"19cf9dd3-f468-4483-8b4e-59a40245b45e\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.712771 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-ovsdbserver-sb\") pod \"19cf9dd3-f468-4483-8b4e-59a40245b45e\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.712816 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-config\") pod \"19cf9dd3-f468-4483-8b4e-59a40245b45e\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 
07:05:55.712881 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-dns-swift-storage-0\") pod \"19cf9dd3-f468-4483-8b4e-59a40245b45e\" (UID: \"19cf9dd3-f468-4483-8b4e-59a40245b45e\") " Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.724257 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19cf9dd3-f468-4483-8b4e-59a40245b45e-kube-api-access-8rwmg" (OuterVolumeSpecName: "kube-api-access-8rwmg") pod "19cf9dd3-f468-4483-8b4e-59a40245b45e" (UID: "19cf9dd3-f468-4483-8b4e-59a40245b45e"). InnerVolumeSpecName "kube-api-access-8rwmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.760204 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "19cf9dd3-f468-4483-8b4e-59a40245b45e" (UID: "19cf9dd3-f468-4483-8b4e-59a40245b45e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.766793 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "19cf9dd3-f468-4483-8b4e-59a40245b45e" (UID: "19cf9dd3-f468-4483-8b4e-59a40245b45e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.768581 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-config" (OuterVolumeSpecName: "config") pod "19cf9dd3-f468-4483-8b4e-59a40245b45e" (UID: "19cf9dd3-f468-4483-8b4e-59a40245b45e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.777688 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "19cf9dd3-f468-4483-8b4e-59a40245b45e" (UID: "19cf9dd3-f468-4483-8b4e-59a40245b45e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.783862 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "19cf9dd3-f468-4483-8b4e-59a40245b45e" (UID: "19cf9dd3-f468-4483-8b4e-59a40245b45e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.815430 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.815465 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.815475 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.815484 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.815492 4482 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19cf9dd3-f468-4483-8b4e-59a40245b45e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:55 crc kubenswrapper[4482]: I1125 07:05:55.815504 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rwmg\" (UniqueName: \"kubernetes.io/projected/19cf9dd3-f468-4483-8b4e-59a40245b45e-kube-api-access-8rwmg\") on node \"crc\" DevicePath \"\"" Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.169524 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n9bvq" event={"ID":"f9dd329e-7514-4dbf-9e8f-e34467fa66ab","Type":"ContainerStarted","Data":"3132db4959392cd254b5a13deb1af5c3b426f3737606912f1d136bc6c5461ae5"} Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.173390 4482 generic.go:334] "Generic (PLEG): container finished" podID="19cf9dd3-f468-4483-8b4e-59a40245b45e" containerID="5dccc611decd232cbbe6c6170f01eaa38b90ae02a10213c0a504c68d2a1ee294" exitCode=0 Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.173434 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" event={"ID":"19cf9dd3-f468-4483-8b4e-59a40245b45e","Type":"ContainerDied","Data":"5dccc611decd232cbbe6c6170f01eaa38b90ae02a10213c0a504c68d2a1ee294"} Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.173788 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" event={"ID":"19cf9dd3-f468-4483-8b4e-59a40245b45e","Type":"ContainerDied","Data":"69f026a05069bafa56fc4a8424c0743917372974f61d06c5915cdb67185e2011"} Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.173833 4482 scope.go:117] "RemoveContainer" containerID="5dccc611decd232cbbe6c6170f01eaa38b90ae02a10213c0a504c68d2a1ee294" Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.173536 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c75cdbd45-cj9pn" Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.204598 4482 scope.go:117] "RemoveContainer" containerID="bdf45347ddc47e44764c69fdd6a4d53af10c6bda3c63f7eb0460369fc2b81490" Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.206074 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-n9bvq" podStartSLOduration=2.20605603 podStartE2EDuration="2.20605603s" podCreationTimestamp="2025-11-25 07:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:05:56.202141179 +0000 UTC m=+1130.690372438" watchObservedRunningTime="2025-11-25 07:05:56.20605603 +0000 UTC m=+1130.694287288" Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.240632 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c75cdbd45-cj9pn"] Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.251732 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c75cdbd45-cj9pn"] Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.292272 4482 scope.go:117] "RemoveContainer" containerID="5dccc611decd232cbbe6c6170f01eaa38b90ae02a10213c0a504c68d2a1ee294" Nov 25 07:05:56 crc kubenswrapper[4482]: E1125 07:05:56.292778 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dccc611decd232cbbe6c6170f01eaa38b90ae02a10213c0a504c68d2a1ee294\": container with ID starting with 5dccc611decd232cbbe6c6170f01eaa38b90ae02a10213c0a504c68d2a1ee294 not found: ID does not exist" containerID="5dccc611decd232cbbe6c6170f01eaa38b90ae02a10213c0a504c68d2a1ee294" Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.292826 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dccc611decd232cbbe6c6170f01eaa38b90ae02a10213c0a504c68d2a1ee294"} err="failed to get container status \"5dccc611decd232cbbe6c6170f01eaa38b90ae02a10213c0a504c68d2a1ee294\": rpc error: code = NotFound desc = could not find container \"5dccc611decd232cbbe6c6170f01eaa38b90ae02a10213c0a504c68d2a1ee294\": container with ID starting with 5dccc611decd232cbbe6c6170f01eaa38b90ae02a10213c0a504c68d2a1ee294 not found: ID does not exist" Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.292845 4482 scope.go:117] "RemoveContainer" containerID="bdf45347ddc47e44764c69fdd6a4d53af10c6bda3c63f7eb0460369fc2b81490" Nov 25 07:05:56 crc kubenswrapper[4482]: E1125 07:05:56.293081 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdf45347ddc47e44764c69fdd6a4d53af10c6bda3c63f7eb0460369fc2b81490\": container with ID starting with bdf45347ddc47e44764c69fdd6a4d53af10c6bda3c63f7eb0460369fc2b81490 not found: ID does not exist" containerID="bdf45347ddc47e44764c69fdd6a4d53af10c6bda3c63f7eb0460369fc2b81490" Nov 25 07:05:56 crc kubenswrapper[4482]: I1125 07:05:56.293105 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdf45347ddc47e44764c69fdd6a4d53af10c6bda3c63f7eb0460369fc2b81490"} err="failed to get container status \"bdf45347ddc47e44764c69fdd6a4d53af10c6bda3c63f7eb0460369fc2b81490\": rpc error: code = NotFound desc = could not find container \"bdf45347ddc47e44764c69fdd6a4d53af10c6bda3c63f7eb0460369fc2b81490\": container with ID starting with 
bdf45347ddc47e44764c69fdd6a4d53af10c6bda3c63f7eb0460369fc2b81490 not found: ID does not exist" Nov 25 07:05:57 crc kubenswrapper[4482]: I1125 07:05:57.187884 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2749f54f-b981-481f-9304-2f83ab6be1e8","Type":"ContainerStarted","Data":"a2f663236d65f0cf1628e75acceafe610d2325faa818c35dbd1f859a657e410a"} Nov 25 07:05:57 crc kubenswrapper[4482]: I1125 07:05:57.188960 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 07:05:57 crc kubenswrapper[4482]: I1125 07:05:57.208988 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.090143735 podStartE2EDuration="6.208965338s" podCreationTimestamp="2025-11-25 07:05:51 +0000 UTC" firstStartedPulling="2025-11-25 07:05:52.056526255 +0000 UTC m=+1126.544757514" lastFinishedPulling="2025-11-25 07:05:56.175347858 +0000 UTC m=+1130.663579117" observedRunningTime="2025-11-25 07:05:57.206009788 +0000 UTC m=+1131.694241047" watchObservedRunningTime="2025-11-25 07:05:57.208965338 +0000 UTC m=+1131.697196597" Nov 25 07:05:57 crc kubenswrapper[4482]: I1125 07:05:57.843419 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19cf9dd3-f468-4483-8b4e-59a40245b45e" path="/var/lib/kubelet/pods/19cf9dd3-f468-4483-8b4e-59a40245b45e/volumes" Nov 25 07:06:00 crc kubenswrapper[4482]: I1125 07:06:00.236523 4482 generic.go:334] "Generic (PLEG): container finished" podID="f9dd329e-7514-4dbf-9e8f-e34467fa66ab" containerID="3132db4959392cd254b5a13deb1af5c3b426f3737606912f1d136bc6c5461ae5" exitCode=0 Nov 25 07:06:00 crc kubenswrapper[4482]: I1125 07:06:00.236894 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n9bvq" event={"ID":"f9dd329e-7514-4dbf-9e8f-e34467fa66ab","Type":"ContainerDied","Data":"3132db4959392cd254b5a13deb1af5c3b426f3737606912f1d136bc6c5461ae5"} Nov 25 07:06:01 crc kubenswrapper[4482]: I1125 07:06:01.532140 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:06:01 crc kubenswrapper[4482]: I1125 07:06:01.559068 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-combined-ca-bundle\") pod \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " Nov 25 07:06:01 crc kubenswrapper[4482]: I1125 07:06:01.559131 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zhw7\" (UniqueName: \"kubernetes.io/projected/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-kube-api-access-2zhw7\") pod \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " Nov 25 07:06:01 crc kubenswrapper[4482]: I1125 07:06:01.559313 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-config-data\") pod \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " Nov 25 07:06:01 crc kubenswrapper[4482]: I1125 07:06:01.559361 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-scripts\") pod \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\" (UID: \"f9dd329e-7514-4dbf-9e8f-e34467fa66ab\") " Nov 25 07:06:01 crc kubenswrapper[4482]: I1125 07:06:01.567238 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-scripts" (OuterVolumeSpecName: "scripts") pod "f9dd329e-7514-4dbf-9e8f-e34467fa66ab" (UID: "f9dd329e-7514-4dbf-9e8f-e34467fa66ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:01 crc kubenswrapper[4482]: I1125 07:06:01.568280 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-kube-api-access-2zhw7" (OuterVolumeSpecName: "kube-api-access-2zhw7") pod "f9dd329e-7514-4dbf-9e8f-e34467fa66ab" (UID: "f9dd329e-7514-4dbf-9e8f-e34467fa66ab"). InnerVolumeSpecName "kube-api-access-2zhw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:06:01 crc kubenswrapper[4482]: I1125 07:06:01.585767 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9dd329e-7514-4dbf-9e8f-e34467fa66ab" (UID: "f9dd329e-7514-4dbf-9e8f-e34467fa66ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:01 crc kubenswrapper[4482]: I1125 07:06:01.588042 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-config-data" (OuterVolumeSpecName: "config-data") pod "f9dd329e-7514-4dbf-9e8f-e34467fa66ab" (UID: "f9dd329e-7514-4dbf-9e8f-e34467fa66ab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:01 crc kubenswrapper[4482]: I1125 07:06:01.663758 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:01 crc kubenswrapper[4482]: I1125 07:06:01.663801 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zhw7\" (UniqueName: \"kubernetes.io/projected/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-kube-api-access-2zhw7\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:01 crc kubenswrapper[4482]: I1125 07:06:01.663829 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:01 crc kubenswrapper[4482]: I1125 07:06:01.663838 4482 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9dd329e-7514-4dbf-9e8f-e34467fa66ab-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:02 crc kubenswrapper[4482]: I1125 07:06:02.262228 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n9bvq" event={"ID":"f9dd329e-7514-4dbf-9e8f-e34467fa66ab","Type":"ContainerDied","Data":"f195a2d0933ff2b3033d04a46fd0b69652dfeea5bd83f750774f41d5a7052b10"} Nov 25 07:06:02 crc kubenswrapper[4482]: I1125 07:06:02.262632 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f195a2d0933ff2b3033d04a46fd0b69652dfeea5bd83f750774f41d5a7052b10" Nov 25 07:06:02 crc kubenswrapper[4482]: I1125 07:06:02.262296 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n9bvq" Nov 25 07:06:02 crc kubenswrapper[4482]: I1125 07:06:02.438241 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:06:02 crc kubenswrapper[4482]: I1125 07:06:02.438491 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="899a2e61-517a-4a6c-bc18-570b1a45e71a" containerName="nova-api-log" containerID="cri-o://3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c" gracePeriod=30 Nov 25 07:06:02 crc kubenswrapper[4482]: I1125 07:06:02.438542 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="899a2e61-517a-4a6c-bc18-570b1a45e71a" containerName="nova-api-api" containerID="cri-o://8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0" gracePeriod=30 Nov 25 07:06:02 crc kubenswrapper[4482]: I1125 07:06:02.453751 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:06:02 crc kubenswrapper[4482]: I1125 07:06:02.454004 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="827243a4-101f-49ab-8219-24fae0a7ea82" containerName="nova-scheduler-scheduler" containerID="cri-o://38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa" gracePeriod=30 Nov 25 07:06:02 crc kubenswrapper[4482]: I1125 07:06:02.480033 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:06:02 crc kubenswrapper[4482]: I1125 07:06:02.480440 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="67bf34f2-664a-4065-88a4-115114e4d445" 
containerName="nova-metadata-metadata" containerID="cri-o://778e7aca03a25d5522ede02fae61c1a2273350f01d46e2e4709f6ec08c7d04e6" gracePeriod=30 Nov 25 07:06:02 crc kubenswrapper[4482]: I1125 07:06:02.480288 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="67bf34f2-664a-4065-88a4-115114e4d445" containerName="nova-metadata-log" containerID="cri-o://fdbff5c839b6c054414f47bec15c1615105bef507d340b1d769f61e67c50d867" gracePeriod=30 Nov 25 07:06:02 crc kubenswrapper[4482]: I1125 07:06:02.990745 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.123872 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-internal-tls-certs\") pod \"899a2e61-517a-4a6c-bc18-570b1a45e71a\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.123964 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf6fc\" (UniqueName: \"kubernetes.io/projected/899a2e61-517a-4a6c-bc18-570b1a45e71a-kube-api-access-wf6fc\") pod \"899a2e61-517a-4a6c-bc18-570b1a45e71a\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.124159 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-public-tls-certs\") pod \"899a2e61-517a-4a6c-bc18-570b1a45e71a\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.124219 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-config-data\") pod \"899a2e61-517a-4a6c-bc18-570b1a45e71a\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.124288 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-combined-ca-bundle\") pod \"899a2e61-517a-4a6c-bc18-570b1a45e71a\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.124371 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899a2e61-517a-4a6c-bc18-570b1a45e71a-logs\") pod \"899a2e61-517a-4a6c-bc18-570b1a45e71a\" (UID: \"899a2e61-517a-4a6c-bc18-570b1a45e71a\") " Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.125312 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/899a2e61-517a-4a6c-bc18-570b1a45e71a-logs" (OuterVolumeSpecName: "logs") pod "899a2e61-517a-4a6c-bc18-570b1a45e71a" (UID: "899a2e61-517a-4a6c-bc18-570b1a45e71a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.132492 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899a2e61-517a-4a6c-bc18-570b1a45e71a-kube-api-access-wf6fc" (OuterVolumeSpecName: "kube-api-access-wf6fc") pod "899a2e61-517a-4a6c-bc18-570b1a45e71a" (UID: "899a2e61-517a-4a6c-bc18-570b1a45e71a"). InnerVolumeSpecName "kube-api-access-wf6fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.151430 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-config-data" (OuterVolumeSpecName: "config-data") pod "899a2e61-517a-4a6c-bc18-570b1a45e71a" (UID: "899a2e61-517a-4a6c-bc18-570b1a45e71a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.152551 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "899a2e61-517a-4a6c-bc18-570b1a45e71a" (UID: "899a2e61-517a-4a6c-bc18-570b1a45e71a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.167667 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "899a2e61-517a-4a6c-bc18-570b1a45e71a" (UID: "899a2e61-517a-4a6c-bc18-570b1a45e71a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.171680 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "899a2e61-517a-4a6c-bc18-570b1a45e71a" (UID: "899a2e61-517a-4a6c-bc18-570b1a45e71a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.229619 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.229698 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899a2e61-517a-4a6c-bc18-570b1a45e71a-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.229760 4482 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.229811 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf6fc\" (UniqueName: \"kubernetes.io/projected/899a2e61-517a-4a6c-bc18-570b1a45e71a-kube-api-access-wf6fc\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.229882 4482 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.229925 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899a2e61-517a-4a6c-bc18-570b1a45e71a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.274746 4482 generic.go:334] "Generic (PLEG): container finished" podID="67bf34f2-664a-4065-88a4-115114e4d445" containerID="fdbff5c839b6c054414f47bec15c1615105bef507d340b1d769f61e67c50d867" exitCode=143 Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.274844 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67bf34f2-664a-4065-88a4-115114e4d445","Type":"ContainerDied","Data":"fdbff5c839b6c054414f47bec15c1615105bef507d340b1d769f61e67c50d867"} Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.277261 4482 generic.go:334] "Generic (PLEG): container finished" podID="899a2e61-517a-4a6c-bc18-570b1a45e71a" containerID="8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0" exitCode=0 Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.277311 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"899a2e61-517a-4a6c-bc18-570b1a45e71a","Type":"ContainerDied","Data":"8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0"} Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.277380 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"899a2e61-517a-4a6c-bc18-570b1a45e71a","Type":"ContainerDied","Data":"3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c"} Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.277405 4482 scope.go:117] "RemoveContainer" containerID="8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.277325 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.277346 4482 generic.go:334] "Generic (PLEG): container finished" podID="899a2e61-517a-4a6c-bc18-570b1a45e71a" containerID="3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c" exitCode=143 Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.277594 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"899a2e61-517a-4a6c-bc18-570b1a45e71a","Type":"ContainerDied","Data":"e5666327704aaa0a47ef6d83c7a07747f2f310ee3e2ddeeac6f3d085217b45cc"} Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.307749 4482 scope.go:117] "RemoveContainer" containerID="3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.319275 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.326259 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.351454 4482 scope.go:117] "RemoveContainer" containerID="8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.353727 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 07:06:03 crc kubenswrapper[4482]: E1125 07:06:03.354364 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19cf9dd3-f468-4483-8b4e-59a40245b45e" containerName="init" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.354386 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="19cf9dd3-f468-4483-8b4e-59a40245b45e" containerName="init" Nov 25 07:06:03 crc kubenswrapper[4482]: E1125 07:06:03.354402 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19cf9dd3-f468-4483-8b4e-59a40245b45e" containerName="dnsmasq-dns" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.354408 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="19cf9dd3-f468-4483-8b4e-59a40245b45e" containerName="dnsmasq-dns" Nov 25 07:06:03 crc kubenswrapper[4482]: E1125 07:06:03.354422 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899a2e61-517a-4a6c-bc18-570b1a45e71a" containerName="nova-api-log" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.354427 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="899a2e61-517a-4a6c-bc18-570b1a45e71a" containerName="nova-api-log" Nov 25 07:06:03 crc kubenswrapper[4482]: E1125 07:06:03.354441 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899a2e61-517a-4a6c-bc18-570b1a45e71a" containerName="nova-api-api" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.354447 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="899a2e61-517a-4a6c-bc18-570b1a45e71a" containerName="nova-api-api" Nov 25 07:06:03 crc kubenswrapper[4482]: E1125 07:06:03.354458 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9dd329e-7514-4dbf-9e8f-e34467fa66ab" containerName="nova-manage" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.354464 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9dd329e-7514-4dbf-9e8f-e34467fa66ab" containerName="nova-manage" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.354636 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9dd329e-7514-4dbf-9e8f-e34467fa66ab" containerName="nova-manage" Nov 25 07:06:03 crc 
kubenswrapper[4482]: I1125 07:06:03.354658 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="899a2e61-517a-4a6c-bc18-570b1a45e71a" containerName="nova-api-log" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.354672 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="899a2e61-517a-4a6c-bc18-570b1a45e71a" containerName="nova-api-api" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.354683 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="19cf9dd3-f468-4483-8b4e-59a40245b45e" containerName="dnsmasq-dns" Nov 25 07:06:03 crc kubenswrapper[4482]: E1125 07:06:03.357090 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0\": container with ID starting with 8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0 not found: ID does not exist" containerID="8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.357140 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0"} err="failed to get container status \"8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0\": rpc error: code = NotFound desc = could not find container \"8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0\": container with ID starting with 8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0 not found: ID does not exist" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.357185 4482 scope.go:117] "RemoveContainer" containerID="3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.357706 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: E1125 07:06:03.358085 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c\": container with ID starting with 3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c not found: ID does not exist" containerID="3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.358115 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c"} err="failed to get container status \"3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c\": rpc error: code = NotFound desc = could not find container \"3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c\": container with ID starting with 3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c not found: ID does not exist" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.358138 4482 scope.go:117] "RemoveContainer" containerID="8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.361229 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.361231 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.361435 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0"} err="failed to get container status \"8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0\": rpc error: code = NotFound desc = could not find container \"8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0\": container with ID starting with 8b839ac79c001fd6de0b5a9fa53e46456a89f7fb489e074dcf68a20502188ce0 not found: ID does not exist" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.361484 4482 scope.go:117] "RemoveContainer" containerID="3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.362292 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.363074 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c"} err="failed to get container status \"3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c\": rpc error: code = NotFound desc = could not find container \"3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c\": container with ID starting with 3e36e8691d445519ff9654622bce70a0d47b4ed3f887f2ac6038189f18e2263c not found: ID does not exist" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.368656 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:06:03 crc kubenswrapper[4482]: E1125 07:06:03.408097 4482 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code 
-1" containerID="38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 07:06:03 crc kubenswrapper[4482]: E1125 07:06:03.411117 4482 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 07:06:03 crc kubenswrapper[4482]: E1125 07:06:03.412763 4482 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 07:06:03 crc kubenswrapper[4482]: E1125 07:06:03.412845 4482 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="827243a4-101f-49ab-8219-24fae0a7ea82" containerName="nova-scheduler-scheduler" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.435800 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced93f5f-b004-4734-b912-3510890d217c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.435867 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced93f5f-b004-4734-b912-3510890d217c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.436003 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5ccx\" (UniqueName: \"kubernetes.io/projected/ced93f5f-b004-4734-b912-3510890d217c-kube-api-access-j5ccx\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.436067 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced93f5f-b004-4734-b912-3510890d217c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.436139 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ced93f5f-b004-4734-b912-3510890d217c-logs\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.436414 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ced93f5f-b004-4734-b912-3510890d217c-config-data\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc 
kubenswrapper[4482]: I1125 07:06:03.537864 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ced93f5f-b004-4734-b912-3510890d217c-config-data\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.538458 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced93f5f-b004-4734-b912-3510890d217c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.538550 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced93f5f-b004-4734-b912-3510890d217c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.538692 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5ccx\" (UniqueName: \"kubernetes.io/projected/ced93f5f-b004-4734-b912-3510890d217c-kube-api-access-j5ccx\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.538803 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced93f5f-b004-4734-b912-3510890d217c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.538944 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ced93f5f-b004-4734-b912-3510890d217c-logs\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.539415 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ced93f5f-b004-4734-b912-3510890d217c-logs\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.541626 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced93f5f-b004-4734-b912-3510890d217c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.541655 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ced93f5f-b004-4734-b912-3510890d217c-config-data\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.541830 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced93f5f-b004-4734-b912-3510890d217c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.542159 4482 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced93f5f-b004-4734-b912-3510890d217c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.556544 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5ccx\" (UniqueName: \"kubernetes.io/projected/ced93f5f-b004-4734-b912-3510890d217c-kube-api-access-j5ccx\") pod \"nova-api-0\" (UID: \"ced93f5f-b004-4734-b912-3510890d217c\") " pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.674633 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:06:03 crc kubenswrapper[4482]: I1125 07:06:03.844134 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899a2e61-517a-4a6c-bc18-570b1a45e71a" path="/var/lib/kubelet/pods/899a2e61-517a-4a6c-bc18-570b1a45e71a/volumes" Nov 25 07:06:04 crc kubenswrapper[4482]: I1125 07:06:04.130779 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:06:04 crc kubenswrapper[4482]: I1125 07:06:04.292581 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ced93f5f-b004-4734-b912-3510890d217c","Type":"ContainerStarted","Data":"db3a6f0aa13867db1e6ae6623b758f859d3581cacfa05fc5b18349618b38039f"} Nov 25 07:06:04 crc kubenswrapper[4482]: I1125 07:06:04.292643 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ced93f5f-b004-4734-b912-3510890d217c","Type":"ContainerStarted","Data":"d24f4a0fd0a646e42f62df193fa0d66722e302fd3d9942d4ab8660ed3a620a90"} Nov 25 07:06:05 crc kubenswrapper[4482]: I1125 07:06:05.312677 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ced93f5f-b004-4734-b912-3510890d217c","Type":"ContainerStarted","Data":"6bc6ac0d93fb9072c4e5163c621abb6367d62917ce8a726749582f2f79457109"} Nov 25 07:06:05 crc kubenswrapper[4482]: I1125 07:06:05.331089 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.331073679 podStartE2EDuration="2.331073679s" podCreationTimestamp="2025-11-25 07:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:06:05.328673112 +0000 UTC m=+1139.816904361" watchObservedRunningTime="2025-11-25 07:06:05.331073679 +0000 UTC m=+1139.819304938" Nov 25 07:06:05 crc kubenswrapper[4482]: I1125 07:06:05.621081 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="67bf34f2-664a-4065-88a4-115114e4d445" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": read tcp 10.217.0.2:54410->10.217.0.206:8775: read: connection reset by peer" Nov 25 07:06:05 crc kubenswrapper[4482]: I1125 07:06:05.621452 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="67bf34f2-664a-4065-88a4-115114e4d445" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": read tcp 10.217.0.2:54406->10.217.0.206:8775: read: connection reset by peer" Nov 25 07:06:05 crc kubenswrapper[4482]: I1125 07:06:05.995085 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.115287 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-combined-ca-bundle\") pod \"67bf34f2-664a-4065-88a4-115114e4d445\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.115813 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67bf34f2-664a-4065-88a4-115114e4d445-logs\") pod \"67bf34f2-664a-4065-88a4-115114e4d445\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.115921 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-nova-metadata-tls-certs\") pod \"67bf34f2-664a-4065-88a4-115114e4d445\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.116014 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-config-data\") pod \"67bf34f2-664a-4065-88a4-115114e4d445\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.116120 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn25c\" (UniqueName: \"kubernetes.io/projected/67bf34f2-664a-4065-88a4-115114e4d445-kube-api-access-wn25c\") pod \"67bf34f2-664a-4065-88a4-115114e4d445\" (UID: \"67bf34f2-664a-4065-88a4-115114e4d445\") " Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.117535 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67bf34f2-664a-4065-88a4-115114e4d445-logs" (OuterVolumeSpecName: "logs") pod "67bf34f2-664a-4065-88a4-115114e4d445" (UID: "67bf34f2-664a-4065-88a4-115114e4d445"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.147453 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67bf34f2-664a-4065-88a4-115114e4d445" (UID: "67bf34f2-664a-4065-88a4-115114e4d445"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.150378 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67bf34f2-664a-4065-88a4-115114e4d445-kube-api-access-wn25c" (OuterVolumeSpecName: "kube-api-access-wn25c") pod "67bf34f2-664a-4065-88a4-115114e4d445" (UID: "67bf34f2-664a-4065-88a4-115114e4d445"). InnerVolumeSpecName "kube-api-access-wn25c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.158126 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-config-data" (OuterVolumeSpecName: "config-data") pod "67bf34f2-664a-4065-88a4-115114e4d445" (UID: "67bf34f2-664a-4065-88a4-115114e4d445"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.184006 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "67bf34f2-664a-4065-88a4-115114e4d445" (UID: "67bf34f2-664a-4065-88a4-115114e4d445"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.221096 4482 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67bf34f2-664a-4065-88a4-115114e4d445-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.221148 4482 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.221164 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.221201 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn25c\" (UniqueName: \"kubernetes.io/projected/67bf34f2-664a-4065-88a4-115114e4d445-kube-api-access-wn25c\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.221212 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67bf34f2-664a-4065-88a4-115114e4d445-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.325723 4482 generic.go:334] "Generic (PLEG): container finished" podID="67bf34f2-664a-4065-88a4-115114e4d445" containerID="778e7aca03a25d5522ede02fae61c1a2273350f01d46e2e4709f6ec08c7d04e6" exitCode=0 Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.327023 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.332349 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67bf34f2-664a-4065-88a4-115114e4d445","Type":"ContainerDied","Data":"778e7aca03a25d5522ede02fae61c1a2273350f01d46e2e4709f6ec08c7d04e6"} Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.332437 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67bf34f2-664a-4065-88a4-115114e4d445","Type":"ContainerDied","Data":"531965f33bf74458edf889548833a21588fa3654b56b5cee164b0825dd4ab4dc"} Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.332484 4482 scope.go:117] "RemoveContainer" containerID="778e7aca03a25d5522ede02fae61c1a2273350f01d46e2e4709f6ec08c7d04e6" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.377988 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.391461 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.406472 4482 scope.go:117] "RemoveContainer" containerID="fdbff5c839b6c054414f47bec15c1615105bef507d340b1d769f61e67c50d867" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.442228 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:06:06 crc kubenswrapper[4482]: E1125 07:06:06.442678 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67bf34f2-664a-4065-88a4-115114e4d445" containerName="nova-metadata-log" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.442697 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="67bf34f2-664a-4065-88a4-115114e4d445" containerName="nova-metadata-log" Nov 25 07:06:06 crc kubenswrapper[4482]: E1125 07:06:06.442721 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67bf34f2-664a-4065-88a4-115114e4d445" containerName="nova-metadata-metadata" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.442727 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="67bf34f2-664a-4065-88a4-115114e4d445" containerName="nova-metadata-metadata" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.442902 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="67bf34f2-664a-4065-88a4-115114e4d445" containerName="nova-metadata-log" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.442926 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="67bf34f2-664a-4065-88a4-115114e4d445" containerName="nova-metadata-metadata" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.443907 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.452521 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.452680 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.453801 4482 scope.go:117] "RemoveContainer" containerID="778e7aca03a25d5522ede02fae61c1a2273350f01d46e2e4709f6ec08c7d04e6" Nov 25 07:06:06 crc kubenswrapper[4482]: E1125 07:06:06.456863 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"778e7aca03a25d5522ede02fae61c1a2273350f01d46e2e4709f6ec08c7d04e6\": container with ID starting with 778e7aca03a25d5522ede02fae61c1a2273350f01d46e2e4709f6ec08c7d04e6 not found: ID does not exist" containerID="778e7aca03a25d5522ede02fae61c1a2273350f01d46e2e4709f6ec08c7d04e6" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.456949 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"778e7aca03a25d5522ede02fae61c1a2273350f01d46e2e4709f6ec08c7d04e6"} err="failed to get container status \"778e7aca03a25d5522ede02fae61c1a2273350f01d46e2e4709f6ec08c7d04e6\": rpc error: code = NotFound desc = could not find container \"778e7aca03a25d5522ede02fae61c1a2273350f01d46e2e4709f6ec08c7d04e6\": container with ID starting with 778e7aca03a25d5522ede02fae61c1a2273350f01d46e2e4709f6ec08c7d04e6 not found: ID does not exist" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.457020 4482 scope.go:117] "RemoveContainer" containerID="fdbff5c839b6c054414f47bec15c1615105bef507d340b1d769f61e67c50d867" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.463753 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:06:06 crc kubenswrapper[4482]: E1125 07:06:06.465654 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdbff5c839b6c054414f47bec15c1615105bef507d340b1d769f61e67c50d867\": container with ID starting with fdbff5c839b6c054414f47bec15c1615105bef507d340b1d769f61e67c50d867 not found: ID does not exist" containerID="fdbff5c839b6c054414f47bec15c1615105bef507d340b1d769f61e67c50d867" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.465688 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbff5c839b6c054414f47bec15c1615105bef507d340b1d769f61e67c50d867"} err="failed to get container status \"fdbff5c839b6c054414f47bec15c1615105bef507d340b1d769f61e67c50d867\": rpc error: code = NotFound desc = could not find container \"fdbff5c839b6c054414f47bec15c1615105bef507d340b1d769f61e67c50d867\": container with ID starting with fdbff5c839b6c054414f47bec15c1615105bef507d340b1d769f61e67c50d867 not found: ID does not exist" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.527546 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdqdc\" (UniqueName: \"kubernetes.io/projected/c519179c-8a71-441f-ae4f-ca6224e057fb-kube-api-access-gdqdc\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.527944 4482 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c519179c-8a71-441f-ae4f-ca6224e057fb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.528191 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c519179c-8a71-441f-ae4f-ca6224e057fb-logs\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.528346 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c519179c-8a71-441f-ae4f-ca6224e057fb-config-data\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.528401 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c519179c-8a71-441f-ae4f-ca6224e057fb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.630027 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c519179c-8a71-441f-ae4f-ca6224e057fb-config-data\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.630078 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c519179c-8a71-441f-ae4f-ca6224e057fb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.630214 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdqdc\" (UniqueName: \"kubernetes.io/projected/c519179c-8a71-441f-ae4f-ca6224e057fb-kube-api-access-gdqdc\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.630363 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c519179c-8a71-441f-ae4f-ca6224e057fb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.630387 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c519179c-8a71-441f-ae4f-ca6224e057fb-logs\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.630787 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c519179c-8a71-441f-ae4f-ca6224e057fb-logs\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " 
pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.634898 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c519179c-8a71-441f-ae4f-ca6224e057fb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.635836 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c519179c-8a71-441f-ae4f-ca6224e057fb-config-data\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.635845 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c519179c-8a71-441f-ae4f-ca6224e057fb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.648795 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdqdc\" (UniqueName: \"kubernetes.io/projected/c519179c-8a71-441f-ae4f-ca6224e057fb-kube-api-access-gdqdc\") pod \"nova-metadata-0\" (UID: \"c519179c-8a71-441f-ae4f-ca6224e057fb\") " pod="openstack/nova-metadata-0" Nov 25 07:06:06 crc kubenswrapper[4482]: I1125 07:06:06.770550 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.143529 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.248342 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h6zr\" (UniqueName: \"kubernetes.io/projected/827243a4-101f-49ab-8219-24fae0a7ea82-kube-api-access-7h6zr\") pod \"827243a4-101f-49ab-8219-24fae0a7ea82\" (UID: \"827243a4-101f-49ab-8219-24fae0a7ea82\") " Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.248541 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827243a4-101f-49ab-8219-24fae0a7ea82-config-data\") pod \"827243a4-101f-49ab-8219-24fae0a7ea82\" (UID: \"827243a4-101f-49ab-8219-24fae0a7ea82\") " Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.248722 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827243a4-101f-49ab-8219-24fae0a7ea82-combined-ca-bundle\") pod \"827243a4-101f-49ab-8219-24fae0a7ea82\" (UID: \"827243a4-101f-49ab-8219-24fae0a7ea82\") " Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.253996 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/827243a4-101f-49ab-8219-24fae0a7ea82-kube-api-access-7h6zr" (OuterVolumeSpecName: "kube-api-access-7h6zr") pod "827243a4-101f-49ab-8219-24fae0a7ea82" (UID: "827243a4-101f-49ab-8219-24fae0a7ea82"). InnerVolumeSpecName "kube-api-access-7h6zr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.274597 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827243a4-101f-49ab-8219-24fae0a7ea82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "827243a4-101f-49ab-8219-24fae0a7ea82" (UID: "827243a4-101f-49ab-8219-24fae0a7ea82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.276502 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827243a4-101f-49ab-8219-24fae0a7ea82-config-data" (OuterVolumeSpecName: "config-data") pod "827243a4-101f-49ab-8219-24fae0a7ea82" (UID: "827243a4-101f-49ab-8219-24fae0a7ea82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.296416 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.336254 4482 generic.go:334] "Generic (PLEG): container finished" podID="827243a4-101f-49ab-8219-24fae0a7ea82" containerID="38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa" exitCode=0 Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.336320 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"827243a4-101f-49ab-8219-24fae0a7ea82","Type":"ContainerDied","Data":"38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa"} Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.336349 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"827243a4-101f-49ab-8219-24fae0a7ea82","Type":"ContainerDied","Data":"f39a4891ee2e90ba00c1c2f68e7287b75fff9294421af995ee8f2f99dd48da0d"} Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.336368 4482 scope.go:117] "RemoveContainer" containerID="38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.336474 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.346028 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c519179c-8a71-441f-ae4f-ca6224e057fb","Type":"ContainerStarted","Data":"606561a796469fc42090f6035498b1449cc2ae567aea1a9a3ae0b40200c2ea9e"} Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.350748 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827243a4-101f-49ab-8219-24fae0a7ea82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.350772 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h6zr\" (UniqueName: \"kubernetes.io/projected/827243a4-101f-49ab-8219-24fae0a7ea82-kube-api-access-7h6zr\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.350783 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827243a4-101f-49ab-8219-24fae0a7ea82-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.359558 4482 scope.go:117] "RemoveContainer" containerID="38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa" Nov 25 07:06:07 crc kubenswrapper[4482]: E1125 07:06:07.359931 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa\": container with ID starting with 38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa not found: ID does not exist" containerID="38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.359965 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa"} err="failed to get container status \"38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa\": rpc error: code = NotFound desc = could not find container \"38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa\": container with ID starting with 38086e663a5e577256e8f0a7cc517dc1ab2aee17ca4613560dab47593c0c8efa not found: ID does not exist" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.365329 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.375943 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.383869 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:06:07 crc kubenswrapper[4482]: E1125 07:06:07.385354 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="827243a4-101f-49ab-8219-24fae0a7ea82" containerName="nova-scheduler-scheduler" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.385377 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="827243a4-101f-49ab-8219-24fae0a7ea82" containerName="nova-scheduler-scheduler" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.386753 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="827243a4-101f-49ab-8219-24fae0a7ea82" containerName="nova-scheduler-scheduler" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.387993 
4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.390619 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.439456 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.454520 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6242fc2-4d80-4b9f-aecc-bd90894cda99-config-data\") pod \"nova-scheduler-0\" (UID: \"e6242fc2-4d80-4b9f-aecc-bd90894cda99\") " pod="openstack/nova-scheduler-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.454772 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmfvr\" (UniqueName: \"kubernetes.io/projected/e6242fc2-4d80-4b9f-aecc-bd90894cda99-kube-api-access-kmfvr\") pod \"nova-scheduler-0\" (UID: \"e6242fc2-4d80-4b9f-aecc-bd90894cda99\") " pod="openstack/nova-scheduler-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.455009 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6242fc2-4d80-4b9f-aecc-bd90894cda99-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e6242fc2-4d80-4b9f-aecc-bd90894cda99\") " pod="openstack/nova-scheduler-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.557929 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6242fc2-4d80-4b9f-aecc-bd90894cda99-config-data\") pod \"nova-scheduler-0\" (UID: \"e6242fc2-4d80-4b9f-aecc-bd90894cda99\") " pod="openstack/nova-scheduler-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.558012 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmfvr\" (UniqueName: \"kubernetes.io/projected/e6242fc2-4d80-4b9f-aecc-bd90894cda99-kube-api-access-kmfvr\") pod \"nova-scheduler-0\" (UID: \"e6242fc2-4d80-4b9f-aecc-bd90894cda99\") " pod="openstack/nova-scheduler-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.558102 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6242fc2-4d80-4b9f-aecc-bd90894cda99-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e6242fc2-4d80-4b9f-aecc-bd90894cda99\") " pod="openstack/nova-scheduler-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.564025 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6242fc2-4d80-4b9f-aecc-bd90894cda99-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e6242fc2-4d80-4b9f-aecc-bd90894cda99\") " pod="openstack/nova-scheduler-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.564288 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6242fc2-4d80-4b9f-aecc-bd90894cda99-config-data\") pod \"nova-scheduler-0\" (UID: \"e6242fc2-4d80-4b9f-aecc-bd90894cda99\") " pod="openstack/nova-scheduler-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.579474 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kmfvr\" (UniqueName: \"kubernetes.io/projected/e6242fc2-4d80-4b9f-aecc-bd90894cda99-kube-api-access-kmfvr\") pod \"nova-scheduler-0\" (UID: \"e6242fc2-4d80-4b9f-aecc-bd90894cda99\") " pod="openstack/nova-scheduler-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.735961 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.856804 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67bf34f2-664a-4065-88a4-115114e4d445" path="/var/lib/kubelet/pods/67bf34f2-664a-4065-88a4-115114e4d445/volumes" Nov 25 07:06:07 crc kubenswrapper[4482]: I1125 07:06:07.857878 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="827243a4-101f-49ab-8219-24fae0a7ea82" path="/var/lib/kubelet/pods/827243a4-101f-49ab-8219-24fae0a7ea82/volumes" Nov 25 07:06:08 crc kubenswrapper[4482]: I1125 07:06:08.357576 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:06:08 crc kubenswrapper[4482]: I1125 07:06:08.374302 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c519179c-8a71-441f-ae4f-ca6224e057fb","Type":"ContainerStarted","Data":"e290892ca0d8db80040173bccbbfca07c85f664952530169f7197480d0ad710c"} Nov 25 07:06:08 crc kubenswrapper[4482]: I1125 07:06:08.374385 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c519179c-8a71-441f-ae4f-ca6224e057fb","Type":"ContainerStarted","Data":"c8a4b57ead11ccb4975159317389eb25214481d90c3ca222809c40b29c15c79f"} Nov 25 07:06:08 crc kubenswrapper[4482]: I1125 07:06:08.418194 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.41816092 podStartE2EDuration="2.41816092s" podCreationTimestamp="2025-11-25 07:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:06:08.407057167 +0000 UTC m=+1142.895288426" watchObservedRunningTime="2025-11-25 07:06:08.41816092 +0000 UTC m=+1142.906392179" Nov 25 07:06:09 crc kubenswrapper[4482]: I1125 07:06:09.118162 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:06:09 crc kubenswrapper[4482]: I1125 07:06:09.118551 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:06:09 crc kubenswrapper[4482]: I1125 07:06:09.391140 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e6242fc2-4d80-4b9f-aecc-bd90894cda99","Type":"ContainerStarted","Data":"409c5100bb6db4b265426b530315d52b042b6779d2526693feb4749a9884ee7f"} Nov 25 07:06:09 crc kubenswrapper[4482]: I1125 07:06:09.392009 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"e6242fc2-4d80-4b9f-aecc-bd90894cda99","Type":"ContainerStarted","Data":"37c982e777d3aba6fc8482507246328db08ad3b8ea8499e8dff71e5c6b52dab6"} Nov 25 07:06:09 crc kubenswrapper[4482]: I1125 07:06:09.407206 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.407152923 podStartE2EDuration="2.407152923s" podCreationTimestamp="2025-11-25 07:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:06:09.40544867 +0000 UTC m=+1143.893679930" watchObservedRunningTime="2025-11-25 07:06:09.407152923 +0000 UTC m=+1143.895384182" Nov 25 07:06:11 crc kubenswrapper[4482]: I1125 07:06:11.772326 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 07:06:11 crc kubenswrapper[4482]: I1125 07:06:11.773011 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 07:06:12 crc kubenswrapper[4482]: I1125 07:06:12.738455 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 07:06:13 crc kubenswrapper[4482]: I1125 07:06:13.675864 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 07:06:13 crc kubenswrapper[4482]: I1125 07:06:13.676155 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 07:06:14 crc kubenswrapper[4482]: I1125 07:06:14.686318 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ced93f5f-b004-4734-b912-3510890d217c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 07:06:14 crc kubenswrapper[4482]: I1125 07:06:14.686346 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ced93f5f-b004-4734-b912-3510890d217c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 07:06:16 crc kubenswrapper[4482]: I1125 07:06:16.772202 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 07:06:16 crc kubenswrapper[4482]: I1125 07:06:16.772488 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 07:06:17 crc kubenswrapper[4482]: I1125 07:06:17.738251 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 07:06:17 crc kubenswrapper[4482]: I1125 07:06:17.768088 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 07:06:17 crc kubenswrapper[4482]: I1125 07:06:17.786323 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c519179c-8a71-441f-ae4f-ca6224e057fb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 07:06:17 crc kubenswrapper[4482]: I1125 07:06:17.786706 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c519179c-8a71-441f-ae4f-ca6224e057fb" containerName="nova-metadata-log" 
probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 07:06:18 crc kubenswrapper[4482]: I1125 07:06:18.536730 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 07:06:21 crc kubenswrapper[4482]: I1125 07:06:21.506692 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 07:06:23 crc kubenswrapper[4482]: I1125 07:06:23.681397 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 07:06:23 crc kubenswrapper[4482]: I1125 07:06:23.682643 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 07:06:23 crc kubenswrapper[4482]: I1125 07:06:23.683527 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 07:06:23 crc kubenswrapper[4482]: I1125 07:06:23.687755 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 07:06:24 crc kubenswrapper[4482]: I1125 07:06:24.578496 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 07:06:24 crc kubenswrapper[4482]: I1125 07:06:24.584650 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 07:06:26 crc kubenswrapper[4482]: I1125 07:06:26.778046 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 07:06:26 crc kubenswrapper[4482]: I1125 07:06:26.780893 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 07:06:26 crc kubenswrapper[4482]: I1125 07:06:26.783971 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 07:06:27 crc kubenswrapper[4482]: I1125 07:06:27.612383 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 07:06:34 crc kubenswrapper[4482]: I1125 07:06:34.305857 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:06:35 crc kubenswrapper[4482]: I1125 07:06:35.244762 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:06:39 crc kubenswrapper[4482]: I1125 07:06:39.020235 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="80610219-52d0-4832-9586-5f565148e662" containerName="rabbitmq" containerID="cri-o://1a5c32b21846c99328ba3f94f60f130e3582b43f3d67d85cd291ea8e87e7780a" gracePeriod=604796 Nov 25 07:06:39 crc kubenswrapper[4482]: I1125 07:06:39.118027 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:06:39 crc kubenswrapper[4482]: I1125 07:06:39.118080 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:06:39 
crc kubenswrapper[4482]: I1125 07:06:39.118117 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 07:06:39 crc kubenswrapper[4482]: I1125 07:06:39.118607 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74ac51368ca9a85524d27db3fb42de85573ff45ef8883e47eb5fe2759d039e48"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:06:39 crc kubenswrapper[4482]: I1125 07:06:39.118659 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://74ac51368ca9a85524d27db3fb42de85573ff45ef8883e47eb5fe2759d039e48" gracePeriod=600 Nov 25 07:06:39 crc kubenswrapper[4482]: I1125 07:06:39.296447 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e0f200db-f6f1-403b-bad6-85a803b5237c" containerName="rabbitmq" containerID="cri-o://b9ee88f6fb40d3c2e01380c5823836e008c41b240f29ea00547428c9f402b949" gracePeriod=604796 Nov 25 07:06:39 crc kubenswrapper[4482]: I1125 07:06:39.746551 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="74ac51368ca9a85524d27db3fb42de85573ff45ef8883e47eb5fe2759d039e48" exitCode=0 Nov 25 07:06:39 crc kubenswrapper[4482]: I1125 07:06:39.746620 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"74ac51368ca9a85524d27db3fb42de85573ff45ef8883e47eb5fe2759d039e48"} Nov 25 07:06:39 crc kubenswrapper[4482]: I1125 07:06:39.746836 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"63bdd9f0fce14d34b7bf553de17b7114201d3cbf1828eb48f5089e09d1c6eec0"} Nov 25 07:06:39 crc kubenswrapper[4482]: I1125 07:06:39.746863 4482 scope.go:117] "RemoveContainer" containerID="6be423e1d99d845691f688b98451ff731b0a6e0f033aa86bb907250d322d441c" Nov 25 07:06:40 crc kubenswrapper[4482]: I1125 07:06:40.346078 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="80610219-52d0-4832-9586-5f565148e662" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 25 07:06:40 crc kubenswrapper[4482]: I1125 07:06:40.640902 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e0f200db-f6f1-403b-bad6-85a803b5237c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.589419 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.665088 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-rabbitmq-tls\") pod \"80610219-52d0-4832-9586-5f565148e662\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.665434 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s96gq\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-kube-api-access-s96gq\") pod \"80610219-52d0-4832-9586-5f565148e662\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.665486 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80610219-52d0-4832-9586-5f565148e662-pod-info\") pod \"80610219-52d0-4832-9586-5f565148e662\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.665532 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-rabbitmq-confd\") pod \"80610219-52d0-4832-9586-5f565148e662\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.665564 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-plugins-conf\") pod \"80610219-52d0-4832-9586-5f565148e662\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.665581 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-config-data\") pod \"80610219-52d0-4832-9586-5f565148e662\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.665612 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80610219-52d0-4832-9586-5f565148e662-rabbitmq-erlang-cookie\") pod \"80610219-52d0-4832-9586-5f565148e662\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.665635 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80610219-52d0-4832-9586-5f565148e662-rabbitmq-plugins\") pod \"80610219-52d0-4832-9586-5f565148e662\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.665672 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-server-conf\") pod \"80610219-52d0-4832-9586-5f565148e662\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.671648 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80610219-52d0-4832-9586-5f565148e662-erlang-cookie-secret\") pod 
\"80610219-52d0-4832-9586-5f565148e662\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.671683 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"80610219-52d0-4832-9586-5f565148e662\" (UID: \"80610219-52d0-4832-9586-5f565148e662\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.671899 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80610219-52d0-4832-9586-5f565148e662-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "80610219-52d0-4832-9586-5f565148e662" (UID: "80610219-52d0-4832-9586-5f565148e662"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.671986 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "80610219-52d0-4832-9586-5f565148e662" (UID: "80610219-52d0-4832-9586-5f565148e662"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.672830 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80610219-52d0-4832-9586-5f565148e662-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "80610219-52d0-4832-9586-5f565148e662" (UID: "80610219-52d0-4832-9586-5f565148e662"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.672890 4482 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.672907 4482 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80610219-52d0-4832-9586-5f565148e662-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.707719 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80610219-52d0-4832-9586-5f565148e662-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "80610219-52d0-4832-9586-5f565148e662" (UID: "80610219-52d0-4832-9586-5f565148e662"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.708055 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/80610219-52d0-4832-9586-5f565148e662-pod-info" (OuterVolumeSpecName: "pod-info") pod "80610219-52d0-4832-9586-5f565148e662" (UID: "80610219-52d0-4832-9586-5f565148e662"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.708267 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-kube-api-access-s96gq" (OuterVolumeSpecName: "kube-api-access-s96gq") pod "80610219-52d0-4832-9586-5f565148e662" (UID: "80610219-52d0-4832-9586-5f565148e662"). InnerVolumeSpecName "kube-api-access-s96gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.709138 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "80610219-52d0-4832-9586-5f565148e662" (UID: "80610219-52d0-4832-9586-5f565148e662"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.727474 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "80610219-52d0-4832-9586-5f565148e662" (UID: "80610219-52d0-4832-9586-5f565148e662"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.750718 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-config-data" (OuterVolumeSpecName: "config-data") pod "80610219-52d0-4832-9586-5f565148e662" (UID: "80610219-52d0-4832-9586-5f565148e662"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.777036 4482 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.777072 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s96gq\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-kube-api-access-s96gq\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.777084 4482 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80610219-52d0-4832-9586-5f565148e662-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.777093 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.777106 4482 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80610219-52d0-4832-9586-5f565148e662-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.777116 4482 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80610219-52d0-4832-9586-5f565148e662-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.777148 4482 reconciler_common.go:286] "operationExecutor.UnmountDevice 
started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.779228 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-server-conf" (OuterVolumeSpecName: "server-conf") pod "80610219-52d0-4832-9586-5f565148e662" (UID: "80610219-52d0-4832-9586-5f565148e662"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.782609 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.810431 4482 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.835178 4482 generic.go:334] "Generic (PLEG): container finished" podID="80610219-52d0-4832-9586-5f565148e662" containerID="1a5c32b21846c99328ba3f94f60f130e3582b43f3d67d85cd291ea8e87e7780a" exitCode=0 Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.835316 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.838420 4482 generic.go:334] "Generic (PLEG): container finished" podID="e0f200db-f6f1-403b-bad6-85a803b5237c" containerID="b9ee88f6fb40d3c2e01380c5823836e008c41b240f29ea00547428c9f402b949" exitCode=0 Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.843083 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.877742 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80610219-52d0-4832-9586-5f565148e662","Type":"ContainerDied","Data":"1a5c32b21846c99328ba3f94f60f130e3582b43f3d67d85cd291ea8e87e7780a"} Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.877790 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80610219-52d0-4832-9586-5f565148e662","Type":"ContainerDied","Data":"a0cfdde975fd2197382ddfd7497534314ae85307bdff34c70db5cebfee330941"} Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.877802 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0f200db-f6f1-403b-bad6-85a803b5237c","Type":"ContainerDied","Data":"b9ee88f6fb40d3c2e01380c5823836e008c41b240f29ea00547428c9f402b949"} Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.877813 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0f200db-f6f1-403b-bad6-85a803b5237c","Type":"ContainerDied","Data":"a5a69402ad8513413eb76851255f730ef202704c9dea30790bf94e220e98052c"} Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.877832 4482 scope.go:117] "RemoveContainer" containerID="1a5c32b21846c99328ba3f94f60f130e3582b43f3d67d85cd291ea8e87e7780a" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.878703 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-tls\") pod \"e0f200db-f6f1-403b-bad6-85a803b5237c\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.880377 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-plugins\") pod \"e0f200db-f6f1-403b-bad6-85a803b5237c\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.880568 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7smv\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-kube-api-access-m7smv\") pod \"e0f200db-f6f1-403b-bad6-85a803b5237c\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.880959 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0f200db-f6f1-403b-bad6-85a803b5237c-pod-info\") pod \"e0f200db-f6f1-403b-bad6-85a803b5237c\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.881059 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-erlang-cookie\") pod \"e0f200db-f6f1-403b-bad6-85a803b5237c\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.885804 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-config-data\") pod \"e0f200db-f6f1-403b-bad6-85a803b5237c\" (UID: 
\"e0f200db-f6f1-403b-bad6-85a803b5237c\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.884289 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e0f200db-f6f1-403b-bad6-85a803b5237c" (UID: "e0f200db-f6f1-403b-bad6-85a803b5237c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.889259 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"e0f200db-f6f1-403b-bad6-85a803b5237c\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.889463 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-server-conf\") pod \"e0f200db-f6f1-403b-bad6-85a803b5237c\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.889529 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0f200db-f6f1-403b-bad6-85a803b5237c-erlang-cookie-secret\") pod \"e0f200db-f6f1-403b-bad6-85a803b5237c\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.889591 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-confd\") pod \"e0f200db-f6f1-403b-bad6-85a803b5237c\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.890042 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-plugins-conf\") pod \"e0f200db-f6f1-403b-bad6-85a803b5237c\" (UID: \"e0f200db-f6f1-403b-bad6-85a803b5237c\") " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.890803 4482 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80610219-52d0-4832-9586-5f565148e662-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.890895 4482 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.890943 4482 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.892837 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e0f200db-f6f1-403b-bad6-85a803b5237c" (UID: "e0f200db-f6f1-403b-bad6-85a803b5237c"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.894728 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e0f200db-f6f1-403b-bad6-85a803b5237c" (UID: "e0f200db-f6f1-403b-bad6-85a803b5237c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.895427 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e0f200db-f6f1-403b-bad6-85a803b5237c" (UID: "e0f200db-f6f1-403b-bad6-85a803b5237c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.895553 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-kube-api-access-m7smv" (OuterVolumeSpecName: "kube-api-access-m7smv") pod "e0f200db-f6f1-403b-bad6-85a803b5237c" (UID: "e0f200db-f6f1-403b-bad6-85a803b5237c"). InnerVolumeSpecName "kube-api-access-m7smv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.900349 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "e0f200db-f6f1-403b-bad6-85a803b5237c" (UID: "e0f200db-f6f1-403b-bad6-85a803b5237c"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.900392 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f200db-f6f1-403b-bad6-85a803b5237c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e0f200db-f6f1-403b-bad6-85a803b5237c" (UID: "e0f200db-f6f1-403b-bad6-85a803b5237c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.905431 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e0f200db-f6f1-403b-bad6-85a803b5237c-pod-info" (OuterVolumeSpecName: "pod-info") pod "e0f200db-f6f1-403b-bad6-85a803b5237c" (UID: "e0f200db-f6f1-403b-bad6-85a803b5237c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.911556 4482 scope.go:117] "RemoveContainer" containerID="0396b2915b1de9596b94bd5ccabe4b7d37ef65c00b8c74d279472bd9e3cd96bd" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.911744 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "80610219-52d0-4832-9586-5f565148e662" (UID: "80610219-52d0-4832-9586-5f565148e662"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.938341 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-config-data" (OuterVolumeSpecName: "config-data") pod "e0f200db-f6f1-403b-bad6-85a803b5237c" (UID: "e0f200db-f6f1-403b-bad6-85a803b5237c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.943085 4482 scope.go:117] "RemoveContainer" containerID="1a5c32b21846c99328ba3f94f60f130e3582b43f3d67d85cd291ea8e87e7780a" Nov 25 07:06:45 crc kubenswrapper[4482]: E1125 07:06:45.945702 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a5c32b21846c99328ba3f94f60f130e3582b43f3d67d85cd291ea8e87e7780a\": container with ID starting with 1a5c32b21846c99328ba3f94f60f130e3582b43f3d67d85cd291ea8e87e7780a not found: ID does not exist" containerID="1a5c32b21846c99328ba3f94f60f130e3582b43f3d67d85cd291ea8e87e7780a" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.945745 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a5c32b21846c99328ba3f94f60f130e3582b43f3d67d85cd291ea8e87e7780a"} err="failed to get container status \"1a5c32b21846c99328ba3f94f60f130e3582b43f3d67d85cd291ea8e87e7780a\": rpc error: code = NotFound desc = could not find container \"1a5c32b21846c99328ba3f94f60f130e3582b43f3d67d85cd291ea8e87e7780a\": container with ID starting with 1a5c32b21846c99328ba3f94f60f130e3582b43f3d67d85cd291ea8e87e7780a not found: ID does not exist" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.945772 4482 scope.go:117] "RemoveContainer" containerID="0396b2915b1de9596b94bd5ccabe4b7d37ef65c00b8c74d279472bd9e3cd96bd" Nov 25 07:06:45 crc kubenswrapper[4482]: E1125 07:06:45.946681 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0396b2915b1de9596b94bd5ccabe4b7d37ef65c00b8c74d279472bd9e3cd96bd\": container with ID starting with 0396b2915b1de9596b94bd5ccabe4b7d37ef65c00b8c74d279472bd9e3cd96bd not found: ID does not exist" containerID="0396b2915b1de9596b94bd5ccabe4b7d37ef65c00b8c74d279472bd9e3cd96bd" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.946714 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0396b2915b1de9596b94bd5ccabe4b7d37ef65c00b8c74d279472bd9e3cd96bd"} err="failed to get container status \"0396b2915b1de9596b94bd5ccabe4b7d37ef65c00b8c74d279472bd9e3cd96bd\": rpc error: code = NotFound desc = could not find container \"0396b2915b1de9596b94bd5ccabe4b7d37ef65c00b8c74d279472bd9e3cd96bd\": container with ID starting with 0396b2915b1de9596b94bd5ccabe4b7d37ef65c00b8c74d279472bd9e3cd96bd not found: ID does not exist" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.946734 4482 scope.go:117] "RemoveContainer" containerID="b9ee88f6fb40d3c2e01380c5823836e008c41b240f29ea00547428c9f402b949" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.960766 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-server-conf" (OuterVolumeSpecName: "server-conf") pod "e0f200db-f6f1-403b-bad6-85a803b5237c" (UID: "e0f200db-f6f1-403b-bad6-85a803b5237c"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.967624 4482 scope.go:117] "RemoveContainer" containerID="5bb777607e066d395aae0c154642d129445b86b639d03147b2ce17c71317f3f9" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.993399 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.993443 4482 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.993456 4482 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.993465 4482 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0f200db-f6f1-403b-bad6-85a803b5237c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.993474 4482 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0f200db-f6f1-403b-bad6-85a803b5237c-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.993483 4482 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.993491 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7smv\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-kube-api-access-m7smv\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.994061 4482 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0f200db-f6f1-403b-bad6-85a803b5237c-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.994072 4482 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.994080 4482 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80610219-52d0-4832-9586-5f565148e662-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.997418 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e0f200db-f6f1-403b-bad6-85a803b5237c" (UID: "e0f200db-f6f1-403b-bad6-85a803b5237c"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.997488 4482 scope.go:117] "RemoveContainer" containerID="b9ee88f6fb40d3c2e01380c5823836e008c41b240f29ea00547428c9f402b949" Nov 25 07:06:45 crc kubenswrapper[4482]: E1125 07:06:45.997927 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9ee88f6fb40d3c2e01380c5823836e008c41b240f29ea00547428c9f402b949\": container with ID starting with b9ee88f6fb40d3c2e01380c5823836e008c41b240f29ea00547428c9f402b949 not found: ID does not exist" containerID="b9ee88f6fb40d3c2e01380c5823836e008c41b240f29ea00547428c9f402b949" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.997960 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9ee88f6fb40d3c2e01380c5823836e008c41b240f29ea00547428c9f402b949"} err="failed to get container status \"b9ee88f6fb40d3c2e01380c5823836e008c41b240f29ea00547428c9f402b949\": rpc error: code = NotFound desc = could not find container \"b9ee88f6fb40d3c2e01380c5823836e008c41b240f29ea00547428c9f402b949\": container with ID starting with b9ee88f6fb40d3c2e01380c5823836e008c41b240f29ea00547428c9f402b949 not found: ID does not exist" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.997982 4482 scope.go:117] "RemoveContainer" containerID="5bb777607e066d395aae0c154642d129445b86b639d03147b2ce17c71317f3f9" Nov 25 07:06:45 crc kubenswrapper[4482]: E1125 07:06:45.998302 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bb777607e066d395aae0c154642d129445b86b639d03147b2ce17c71317f3f9\": container with ID starting with 5bb777607e066d395aae0c154642d129445b86b639d03147b2ce17c71317f3f9 not found: ID does not exist" containerID="5bb777607e066d395aae0c154642d129445b86b639d03147b2ce17c71317f3f9" Nov 25 07:06:45 crc kubenswrapper[4482]: I1125 07:06:45.998332 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb777607e066d395aae0c154642d129445b86b639d03147b2ce17c71317f3f9"} err="failed to get container status \"5bb777607e066d395aae0c154642d129445b86b639d03147b2ce17c71317f3f9\": rpc error: code = NotFound desc = could not find container \"5bb777607e066d395aae0c154642d129445b86b639d03147b2ce17c71317f3f9\": container with ID starting with 5bb777607e066d395aae0c154642d129445b86b639d03147b2ce17c71317f3f9 not found: ID does not exist" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.011675 4482 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.095015 4482 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.095043 4482 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0f200db-f6f1-403b-bad6-85a803b5237c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.247194 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.257690 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.273450 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.280426 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.292928 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:06:46 crc kubenswrapper[4482]: E1125 07:06:46.293315 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f200db-f6f1-403b-bad6-85a803b5237c" containerName="setup-container" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.293335 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f200db-f6f1-403b-bad6-85a803b5237c" containerName="setup-container" Nov 25 07:06:46 crc kubenswrapper[4482]: E1125 07:06:46.293349 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80610219-52d0-4832-9586-5f565148e662" containerName="setup-container" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.293355 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="80610219-52d0-4832-9586-5f565148e662" containerName="setup-container" Nov 25 07:06:46 crc kubenswrapper[4482]: E1125 07:06:46.293380 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80610219-52d0-4832-9586-5f565148e662" containerName="rabbitmq" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.293385 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="80610219-52d0-4832-9586-5f565148e662" containerName="rabbitmq" Nov 25 07:06:46 crc kubenswrapper[4482]: E1125 07:06:46.293393 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f200db-f6f1-403b-bad6-85a803b5237c" containerName="rabbitmq" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.293397 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f200db-f6f1-403b-bad6-85a803b5237c" containerName="rabbitmq" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.293584 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f200db-f6f1-403b-bad6-85a803b5237c" containerName="rabbitmq" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.293601 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="80610219-52d0-4832-9586-5f565148e662" containerName="rabbitmq" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.295390 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.302104 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.302318 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.303224 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.308262 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-z2r8l" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.308400 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.308774 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.308780 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.310212 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.311450 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.312222 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.313192 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.313354 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.313676 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.323532 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.339802 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.355771 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.356106 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-v98p2" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.356136 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.460292 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78685f49d5-v8gp6"] Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.462790 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.467306 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.474254 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78685f49d5-v8gp6"] Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.511861 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7ce3d46-19fe-494c-a2ce-44ca82debd20-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.511918 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7ce3d46-19fe-494c-a2ce-44ca82debd20-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512097 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/874cee1f-6776-46a5-b8bb-bed0bf553194-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512149 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/874cee1f-6776-46a5-b8bb-bed0bf553194-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512216 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/874cee1f-6776-46a5-b8bb-bed0bf553194-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512277 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7ce3d46-19fe-494c-a2ce-44ca82debd20-config-data\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512345 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/874cee1f-6776-46a5-b8bb-bed0bf553194-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512376 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/874cee1f-6776-46a5-b8bb-bed0bf553194-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc 
kubenswrapper[4482]: I1125 07:06:46.512398 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/874cee1f-6776-46a5-b8bb-bed0bf553194-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512461 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xfv7\" (UniqueName: \"kubernetes.io/projected/874cee1f-6776-46a5-b8bb-bed0bf553194-kube-api-access-5xfv7\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512526 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/874cee1f-6776-46a5-b8bb-bed0bf553194-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512557 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7ce3d46-19fe-494c-a2ce-44ca82debd20-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512701 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/874cee1f-6776-46a5-b8bb-bed0bf553194-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512743 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7ce3d46-19fe-494c-a2ce-44ca82debd20-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512766 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/874cee1f-6776-46a5-b8bb-bed0bf553194-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512919 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7ce3d46-19fe-494c-a2ce-44ca82debd20-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.512974 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc 
kubenswrapper[4482]: I1125 07:06:46.513016 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.513093 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7ce3d46-19fe-494c-a2ce-44ca82debd20-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.513129 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks95j\" (UniqueName: \"kubernetes.io/projected/e7ce3d46-19fe-494c-a2ce-44ca82debd20-kube-api-access-ks95j\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.513201 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7ce3d46-19fe-494c-a2ce-44ca82debd20-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.513223 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7ce3d46-19fe-494c-a2ce-44ca82debd20-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.615903 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-dns-svc\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.615958 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/874cee1f-6776-46a5-b8bb-bed0bf553194-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.615979 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7ce3d46-19fe-494c-a2ce-44ca82debd20-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.615998 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/874cee1f-6776-46a5-b8bb-bed0bf553194-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616032 4482 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7ce3d46-19fe-494c-a2ce-44ca82debd20-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616053 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616076 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616102 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7ce3d46-19fe-494c-a2ce-44ca82debd20-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616123 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-config\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616144 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks95j\" (UniqueName: \"kubernetes.io/projected/e7ce3d46-19fe-494c-a2ce-44ca82debd20-kube-api-access-ks95j\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616188 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7ce3d46-19fe-494c-a2ce-44ca82debd20-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616211 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7ce3d46-19fe-494c-a2ce-44ca82debd20-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616240 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-dns-swift-storage-0\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616262 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7ce3d46-19fe-494c-a2ce-44ca82debd20-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616280 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7ce3d46-19fe-494c-a2ce-44ca82debd20-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616336 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-ovsdbserver-nb\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616367 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-openstack-edpm-ipam\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616391 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/874cee1f-6776-46a5-b8bb-bed0bf553194-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616414 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/874cee1f-6776-46a5-b8bb-bed0bf553194-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616437 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/874cee1f-6776-46a5-b8bb-bed0bf553194-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616456 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7ce3d46-19fe-494c-a2ce-44ca82debd20-config-data\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616479 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/874cee1f-6776-46a5-b8bb-bed0bf553194-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616499 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/874cee1f-6776-46a5-b8bb-bed0bf553194-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 
07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616514 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/874cee1f-6776-46a5-b8bb-bed0bf553194-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616537 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xfv7\" (UniqueName: \"kubernetes.io/projected/874cee1f-6776-46a5-b8bb-bed0bf553194-kube-api-access-5xfv7\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616565 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/874cee1f-6776-46a5-b8bb-bed0bf553194-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616586 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzsl4\" (UniqueName: \"kubernetes.io/projected/be5da322-ab33-4deb-8049-91903df11263-kube-api-access-nzsl4\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616605 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-ovsdbserver-sb\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.616624 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7ce3d46-19fe-494c-a2ce-44ca82debd20-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.617154 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/874cee1f-6776-46a5-b8bb-bed0bf553194-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.617604 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7ce3d46-19fe-494c-a2ce-44ca82debd20-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.618164 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.618284 4482 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.618456 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7ce3d46-19fe-494c-a2ce-44ca82debd20-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.618957 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/874cee1f-6776-46a5-b8bb-bed0bf553194-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.619603 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7ce3d46-19fe-494c-a2ce-44ca82debd20-config-data\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.620413 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7ce3d46-19fe-494c-a2ce-44ca82debd20-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.621409 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/874cee1f-6776-46a5-b8bb-bed0bf553194-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.621624 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7ce3d46-19fe-494c-a2ce-44ca82debd20-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.622405 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/874cee1f-6776-46a5-b8bb-bed0bf553194-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.624970 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/874cee1f-6776-46a5-b8bb-bed0bf553194-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.628750 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7ce3d46-19fe-494c-a2ce-44ca82debd20-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " 
pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.633492 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/874cee1f-6776-46a5-b8bb-bed0bf553194-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.633765 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/874cee1f-6776-46a5-b8bb-bed0bf553194-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.634766 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/874cee1f-6776-46a5-b8bb-bed0bf553194-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.638981 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xfv7\" (UniqueName: \"kubernetes.io/projected/874cee1f-6776-46a5-b8bb-bed0bf553194-kube-api-access-5xfv7\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.640332 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/874cee1f-6776-46a5-b8bb-bed0bf553194-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.640745 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7ce3d46-19fe-494c-a2ce-44ca82debd20-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.643188 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7ce3d46-19fe-494c-a2ce-44ca82debd20-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.646134 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7ce3d46-19fe-494c-a2ce-44ca82debd20-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.646142 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks95j\" (UniqueName: \"kubernetes.io/projected/e7ce3d46-19fe-494c-a2ce-44ca82debd20-kube-api-access-ks95j\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.668324 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"e7ce3d46-19fe-494c-a2ce-44ca82debd20\") " pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.671932 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"874cee1f-6776-46a5-b8bb-bed0bf553194\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.674299 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.724052 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-config\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.724119 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-dns-swift-storage-0\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.724179 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-ovsdbserver-nb\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.724212 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-openstack-edpm-ipam\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.724276 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzsl4\" (UniqueName: \"kubernetes.io/projected/be5da322-ab33-4deb-8049-91903df11263-kube-api-access-nzsl4\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.724292 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-ovsdbserver-sb\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.724318 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-dns-svc\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.725091 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-dns-swift-storage-0\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.725752 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-dns-svc\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.728608 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-openstack-edpm-ipam\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.728828 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-ovsdbserver-sb\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.728833 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-config\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.729110 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-ovsdbserver-nb\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.741875 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzsl4\" (UniqueName: \"kubernetes.io/projected/be5da322-ab33-4deb-8049-91903df11263-kube-api-access-nzsl4\") pod \"dnsmasq-dns-78685f49d5-v8gp6\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") " pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.782502 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:46 crc kubenswrapper[4482]: I1125 07:06:46.972465 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:06:47 crc kubenswrapper[4482]: I1125 07:06:47.196957 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:06:47 crc kubenswrapper[4482]: I1125 07:06:47.296904 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78685f49d5-v8gp6"] Nov 25 07:06:47 crc kubenswrapper[4482]: I1125 07:06:47.438807 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:06:47 crc kubenswrapper[4482]: W1125 07:06:47.449084 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod874cee1f_6776_46a5_b8bb_bed0bf553194.slice/crio-58c610ef0191c84e7f334e49497a0f2856786b27eb6e639121e6089526ac5e82 WatchSource:0}: Error finding container 58c610ef0191c84e7f334e49497a0f2856786b27eb6e639121e6089526ac5e82: Status 404 returned error can't find the container with id 58c610ef0191c84e7f334e49497a0f2856786b27eb6e639121e6089526ac5e82 Nov 25 07:06:47 crc kubenswrapper[4482]: I1125 07:06:47.845853 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80610219-52d0-4832-9586-5f565148e662" path="/var/lib/kubelet/pods/80610219-52d0-4832-9586-5f565148e662/volumes" Nov 25 07:06:47 crc kubenswrapper[4482]: I1125 07:06:47.847437 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f200db-f6f1-403b-bad6-85a803b5237c" path="/var/lib/kubelet/pods/e0f200db-f6f1-403b-bad6-85a803b5237c/volumes" Nov 25 07:06:47 crc kubenswrapper[4482]: I1125 07:06:47.903240 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7ce3d46-19fe-494c-a2ce-44ca82debd20","Type":"ContainerStarted","Data":"c59524fe2a2ad47cb0d2f993f1175ef7a2154e8eda5d66e6f950e55b7ea38806"} Nov 25 07:06:47 crc kubenswrapper[4482]: I1125 07:06:47.905129 4482 generic.go:334] "Generic (PLEG): container finished" podID="be5da322-ab33-4deb-8049-91903df11263" containerID="b9d187221173308ef3b60bd11313234610f8c6c2cc858463008afed47a5d4c34" exitCode=0 Nov 25 07:06:47 crc kubenswrapper[4482]: I1125 07:06:47.905214 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" event={"ID":"be5da322-ab33-4deb-8049-91903df11263","Type":"ContainerDied","Data":"b9d187221173308ef3b60bd11313234610f8c6c2cc858463008afed47a5d4c34"} Nov 25 07:06:47 crc kubenswrapper[4482]: I1125 07:06:47.905246 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" event={"ID":"be5da322-ab33-4deb-8049-91903df11263","Type":"ContainerStarted","Data":"b1a5c355f8e9e51cb411360e637c7d54ed0d946ac27d6d9313f2191552189cb5"} Nov 25 07:06:47 crc kubenswrapper[4482]: I1125 07:06:47.907335 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"874cee1f-6776-46a5-b8bb-bed0bf553194","Type":"ContainerStarted","Data":"58c610ef0191c84e7f334e49497a0f2856786b27eb6e639121e6089526ac5e82"} Nov 25 07:06:48 crc kubenswrapper[4482]: I1125 07:06:48.923996 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" event={"ID":"be5da322-ab33-4deb-8049-91903df11263","Type":"ContainerStarted","Data":"64b953132c2dea885cf9a4f5412494cc4964b84e563c817c5c3625263423ee10"} Nov 25 07:06:48 crc kubenswrapper[4482]: I1125 07:06:48.924554 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:48 crc kubenswrapper[4482]: I1125 07:06:48.926359 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7ce3d46-19fe-494c-a2ce-44ca82debd20","Type":"ContainerStarted","Data":"29389a4e0fcbcc017f6214ee29ef91f442034f29e7872d67b53946e5c80fa1dc"} Nov 25 07:06:48 crc kubenswrapper[4482]: I1125 07:06:48.944749 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" podStartSLOduration=2.944731234 podStartE2EDuration="2.944731234s" podCreationTimestamp="2025-11-25 07:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:06:48.937434069 +0000 UTC m=+1183.425665328" watchObservedRunningTime="2025-11-25 07:06:48.944731234 +0000 UTC m=+1183.432962493" Nov 25 07:06:49 crc kubenswrapper[4482]: I1125 07:06:49.939143 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"874cee1f-6776-46a5-b8bb-bed0bf553194","Type":"ContainerStarted","Data":"d3f4431f12ea8e2a7eac57c8eeb958000f3787f017b795fc317d7988b9ee3906"} Nov 25 07:06:56 crc kubenswrapper[4482]: I1125 07:06:56.784796 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" Nov 25 07:06:56 crc kubenswrapper[4482]: I1125 07:06:56.879498 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bf444cbf-szzdl"] Nov 25 07:06:56 crc kubenswrapper[4482]: I1125 07:06:56.880611 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" podUID="fcdb3d0c-8d88-49e0-b213-703b54444699" containerName="dnsmasq-dns" containerID="cri-o://d689afe052284a34af6acd3a9315547af9939279cb32e83f47259f5b91433500" gracePeriod=10 Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.025306 4482 generic.go:334] "Generic (PLEG): container finished" podID="fcdb3d0c-8d88-49e0-b213-703b54444699" containerID="d689afe052284a34af6acd3a9315547af9939279cb32e83f47259f5b91433500" exitCode=0 Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.025352 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" event={"ID":"fcdb3d0c-8d88-49e0-b213-703b54444699","Type":"ContainerDied","Data":"d689afe052284a34af6acd3a9315547af9939279cb32e83f47259f5b91433500"} Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.043944 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d57468c5-55th4"] Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.048278 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.090426 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d57468c5-55th4"] Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.158587 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-ovsdbserver-nb\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.158634 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.158728 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-config\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.158858 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-ovsdbserver-sb\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.158937 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-dns-svc\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.158985 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjdzs\" (UniqueName: \"kubernetes.io/projected/94d58437-c58e-4c82-bfde-7c2a5a2d7672-kube-api-access-hjdzs\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.159032 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-dns-swift-storage-0\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.264600 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-ovsdbserver-nb\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.264657 4482 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.264698 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-config\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.264768 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-ovsdbserver-sb\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.264807 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-dns-svc\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.264834 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjdzs\" (UniqueName: \"kubernetes.io/projected/94d58437-c58e-4c82-bfde-7c2a5a2d7672-kube-api-access-hjdzs\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.264861 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-dns-swift-storage-0\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.265768 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-dns-swift-storage-0\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.266041 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-config\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.266659 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-ovsdbserver-nb\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.267053 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-dns-svc\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.267050 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.267512 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94d58437-c58e-4c82-bfde-7c2a5a2d7672-ovsdbserver-sb\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.299429 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjdzs\" (UniqueName: \"kubernetes.io/projected/94d58437-c58e-4c82-bfde-7c2a5a2d7672-kube-api-access-hjdzs\") pod \"dnsmasq-dns-7d57468c5-55th4\" (UID: \"94d58437-c58e-4c82-bfde-7c2a5a2d7672\") " pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.395837 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d57468c5-55th4" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.543055 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.674715 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l69zx\" (UniqueName: \"kubernetes.io/projected/fcdb3d0c-8d88-49e0-b213-703b54444699-kube-api-access-l69zx\") pod \"fcdb3d0c-8d88-49e0-b213-703b54444699\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.674853 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-dns-swift-storage-0\") pod \"fcdb3d0c-8d88-49e0-b213-703b54444699\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.674892 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-ovsdbserver-nb\") pod \"fcdb3d0c-8d88-49e0-b213-703b54444699\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.674950 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-ovsdbserver-sb\") pod \"fcdb3d0c-8d88-49e0-b213-703b54444699\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.674984 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-config\") pod \"fcdb3d0c-8d88-49e0-b213-703b54444699\" (UID: \"fcdb3d0c-8d88-49e0-b213-703b54444699\") " Nov 25 07:06:57 crc 
Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.681125 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcdb3d0c-8d88-49e0-b213-703b54444699-kube-api-access-l69zx" (OuterVolumeSpecName: "kube-api-access-l69zx") pod "fcdb3d0c-8d88-49e0-b213-703b54444699" (UID: "fcdb3d0c-8d88-49e0-b213-703b54444699"). InnerVolumeSpecName "kube-api-access-l69zx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.725193 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fcdb3d0c-8d88-49e0-b213-703b54444699" (UID: "fcdb3d0c-8d88-49e0-b213-703b54444699"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.729146 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fcdb3d0c-8d88-49e0-b213-703b54444699" (UID: "fcdb3d0c-8d88-49e0-b213-703b54444699"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.729379 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-config" (OuterVolumeSpecName: "config") pod "fcdb3d0c-8d88-49e0-b213-703b54444699" (UID: "fcdb3d0c-8d88-49e0-b213-703b54444699"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.731662 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fcdb3d0c-8d88-49e0-b213-703b54444699" (UID: "fcdb3d0c-8d88-49e0-b213-703b54444699"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.733838 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fcdb3d0c-8d88-49e0-b213-703b54444699" (UID: "fcdb3d0c-8d88-49e0-b213-703b54444699"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.778091 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l69zx\" (UniqueName: \"kubernetes.io/projected/fcdb3d0c-8d88-49e0-b213-703b54444699-kube-api-access-l69zx\") on node \"crc\" DevicePath \"\""
Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.778131 4482 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.778144 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.778155 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.778183 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-config\") on node \"crc\" DevicePath \"\""
Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.778196 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcdb3d0c-8d88-49e0-b213-703b54444699-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 07:06:57 crc kubenswrapper[4482]: I1125 07:06:57.828229 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d57468c5-55th4"]
Nov 25 07:06:57 crc kubenswrapper[4482]: W1125 07:06:57.836428 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94d58437_c58e_4c82_bfde_7c2a5a2d7672.slice/crio-0fd7a1a78ee49e51cfbd9e3673d8a941c7364cdd1e9a86c055a428c1058b0b31 WatchSource:0}: Error finding container 0fd7a1a78ee49e51cfbd9e3673d8a941c7364cdd1e9a86c055a428c1058b0b31: Status 404 returned error can't find the container with id 0fd7a1a78ee49e51cfbd9e3673d8a941c7364cdd1e9a86c055a428c1058b0b31
Nov 25 07:06:58 crc kubenswrapper[4482]: I1125 07:06:58.036398 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bf444cbf-szzdl" event={"ID":"fcdb3d0c-8d88-49e0-b213-703b54444699","Type":"ContainerDied","Data":"5c5ed78306e7d45bfe50a8beaa1ec811c76c637952402efd9dcaf5d4fffa339d"}
Nov 25 07:06:58 crc kubenswrapper[4482]: I1125 07:06:58.036499 4482 scope.go:117] "RemoveContainer" containerID="d689afe052284a34af6acd3a9315547af9939279cb32e83f47259f5b91433500"
Nov 25 07:06:58 crc kubenswrapper[4482]: I1125 07:06:58.036427 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bf444cbf-szzdl"
Nov 25 07:06:58 crc kubenswrapper[4482]: I1125 07:06:58.040321 4482 generic.go:334] "Generic (PLEG): container finished" podID="94d58437-c58e-4c82-bfde-7c2a5a2d7672" containerID="82099386213f0707d532722161a59d18dd878608e40788413d5aae5e161ef305" exitCode=0
Nov 25 07:06:58 crc kubenswrapper[4482]: I1125 07:06:58.040392 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d57468c5-55th4" event={"ID":"94d58437-c58e-4c82-bfde-7c2a5a2d7672","Type":"ContainerDied","Data":"82099386213f0707d532722161a59d18dd878608e40788413d5aae5e161ef305"}
Nov 25 07:06:58 crc kubenswrapper[4482]: I1125 07:06:58.040417 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d57468c5-55th4" event={"ID":"94d58437-c58e-4c82-bfde-7c2a5a2d7672","Type":"ContainerStarted","Data":"0fd7a1a78ee49e51cfbd9e3673d8a941c7364cdd1e9a86c055a428c1058b0b31"}
Nov 25 07:06:58 crc kubenswrapper[4482]: I1125 07:06:58.242855 4482 scope.go:117] "RemoveContainer" containerID="a8b9654fbd181e4336061e77b6962fbefc76916f4d36bd7548d76d292a43a0ea"
Nov 25 07:06:58 crc kubenswrapper[4482]: I1125 07:06:58.288512 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bf444cbf-szzdl"]
Nov 25 07:06:58 crc kubenswrapper[4482]: I1125 07:06:58.295401 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86bf444cbf-szzdl"]
Nov 25 07:06:59 crc kubenswrapper[4482]: I1125 07:06:59.054477 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d57468c5-55th4" event={"ID":"94d58437-c58e-4c82-bfde-7c2a5a2d7672","Type":"ContainerStarted","Data":"11575f0d7a187a7de4790803a50795a911f42c61721ebdac078412b2cb1c0c69"}
Nov 25 07:06:59 crc kubenswrapper[4482]: I1125 07:06:59.055079 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d57468c5-55th4"
Nov 25 07:06:59 crc kubenswrapper[4482]: I1125 07:06:59.074522 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d57468c5-55th4" podStartSLOduration=2.074502139 podStartE2EDuration="2.074502139s" podCreationTimestamp="2025-11-25 07:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:06:59.070670145 +0000 UTC m=+1193.558901404" watchObservedRunningTime="2025-11-25 07:06:59.074502139 +0000 UTC m=+1193.562733398"
Nov 25 07:06:59 crc kubenswrapper[4482]: I1125 07:06:59.842200 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcdb3d0c-8d88-49e0-b213-703b54444699" path="/var/lib/kubelet/pods/fcdb3d0c-8d88-49e0-b213-703b54444699/volumes"
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.398416 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d57468c5-55th4"
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.444084 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78685f49d5-v8gp6"]
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.444353 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" podUID="be5da322-ab33-4deb-8049-91903df11263" containerName="dnsmasq-dns" containerID="cri-o://64b953132c2dea885cf9a4f5412494cc4964b84e563c817c5c3625263423ee10" gracePeriod=10
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.867673 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78685f49d5-v8gp6"
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.901188 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-config\") pod \"be5da322-ab33-4deb-8049-91903df11263\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") "
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.901285 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-ovsdbserver-sb\") pod \"be5da322-ab33-4deb-8049-91903df11263\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") "
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.901318 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzsl4\" (UniqueName: \"kubernetes.io/projected/be5da322-ab33-4deb-8049-91903df11263-kube-api-access-nzsl4\") pod \"be5da322-ab33-4deb-8049-91903df11263\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") "
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.901335 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-dns-swift-storage-0\") pod \"be5da322-ab33-4deb-8049-91903df11263\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") "
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.901397 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-dns-svc\") pod \"be5da322-ab33-4deb-8049-91903df11263\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") "
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.901422 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-openstack-edpm-ipam\") pod \"be5da322-ab33-4deb-8049-91903df11263\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") "
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.901457 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-ovsdbserver-nb\") pod \"be5da322-ab33-4deb-8049-91903df11263\" (UID: \"be5da322-ab33-4deb-8049-91903df11263\") "
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.942667 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be5da322-ab33-4deb-8049-91903df11263-kube-api-access-nzsl4" (OuterVolumeSpecName: "kube-api-access-nzsl4") pod "be5da322-ab33-4deb-8049-91903df11263" (UID: "be5da322-ab33-4deb-8049-91903df11263"). InnerVolumeSpecName "kube-api-access-nzsl4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.964729 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "be5da322-ab33-4deb-8049-91903df11263" (UID: "be5da322-ab33-4deb-8049-91903df11263"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.966232 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "be5da322-ab33-4deb-8049-91903df11263" (UID: "be5da322-ab33-4deb-8049-91903df11263"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.976688 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "be5da322-ab33-4deb-8049-91903df11263" (UID: "be5da322-ab33-4deb-8049-91903df11263"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.978755 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be5da322-ab33-4deb-8049-91903df11263" (UID: "be5da322-ab33-4deb-8049-91903df11263"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.978759 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-config" (OuterVolumeSpecName: "config") pod "be5da322-ab33-4deb-8049-91903df11263" (UID: "be5da322-ab33-4deb-8049-91903df11263"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:07:07 crc kubenswrapper[4482]: I1125 07:07:07.982154 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "be5da322-ab33-4deb-8049-91903df11263" (UID: "be5da322-ab33-4deb-8049-91903df11263"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.003101 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.003130 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-config\") on node \"crc\" DevicePath \"\""
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.003140 4482 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.003155 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzsl4\" (UniqueName: \"kubernetes.io/projected/be5da322-ab33-4deb-8049-91903df11263-kube-api-access-nzsl4\") on node \"crc\" DevicePath \"\""
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.003167 4482 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.003189 4482 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.003199 4482 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/be5da322-ab33-4deb-8049-91903df11263-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.154961 4482 generic.go:334] "Generic (PLEG): container finished" podID="be5da322-ab33-4deb-8049-91903df11263" containerID="64b953132c2dea885cf9a4f5412494cc4964b84e563c817c5c3625263423ee10" exitCode=0
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.155008 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" event={"ID":"be5da322-ab33-4deb-8049-91903df11263","Type":"ContainerDied","Data":"64b953132c2dea885cf9a4f5412494cc4964b84e563c817c5c3625263423ee10"}
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.155043 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78685f49d5-v8gp6" event={"ID":"be5da322-ab33-4deb-8049-91903df11263","Type":"ContainerDied","Data":"b1a5c355f8e9e51cb411360e637c7d54ed0d946ac27d6d9313f2191552189cb5"}
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.155060 4482 scope.go:117] "RemoveContainer" containerID="64b953132c2dea885cf9a4f5412494cc4964b84e563c817c5c3625263423ee10"
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.155214 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78685f49d5-v8gp6"
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.180736 4482 scope.go:117] "RemoveContainer" containerID="b9d187221173308ef3b60bd11313234610f8c6c2cc858463008afed47a5d4c34"
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.189462 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78685f49d5-v8gp6"]
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.195949 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78685f49d5-v8gp6"]
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.204690 4482 scope.go:117] "RemoveContainer" containerID="64b953132c2dea885cf9a4f5412494cc4964b84e563c817c5c3625263423ee10"
Nov 25 07:07:08 crc kubenswrapper[4482]: E1125 07:07:08.205109 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64b953132c2dea885cf9a4f5412494cc4964b84e563c817c5c3625263423ee10\": container with ID starting with 64b953132c2dea885cf9a4f5412494cc4964b84e563c817c5c3625263423ee10 not found: ID does not exist" containerID="64b953132c2dea885cf9a4f5412494cc4964b84e563c817c5c3625263423ee10"
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.205164 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b953132c2dea885cf9a4f5412494cc4964b84e563c817c5c3625263423ee10"} err="failed to get container status \"64b953132c2dea885cf9a4f5412494cc4964b84e563c817c5c3625263423ee10\": rpc error: code = NotFound desc = could not find container \"64b953132c2dea885cf9a4f5412494cc4964b84e563c817c5c3625263423ee10\": container with ID starting with 64b953132c2dea885cf9a4f5412494cc4964b84e563c817c5c3625263423ee10 not found: ID does not exist"
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.205227 4482 scope.go:117] "RemoveContainer" containerID="b9d187221173308ef3b60bd11313234610f8c6c2cc858463008afed47a5d4c34"
Nov 25 07:07:08 crc kubenswrapper[4482]: E1125 07:07:08.205588 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9d187221173308ef3b60bd11313234610f8c6c2cc858463008afed47a5d4c34\": container with ID starting with b9d187221173308ef3b60bd11313234610f8c6c2cc858463008afed47a5d4c34 not found: ID does not exist" containerID="b9d187221173308ef3b60bd11313234610f8c6c2cc858463008afed47a5d4c34"
Nov 25 07:07:08 crc kubenswrapper[4482]: I1125 07:07:08.205622 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9d187221173308ef3b60bd11313234610f8c6c2cc858463008afed47a5d4c34"} err="failed to get container status \"b9d187221173308ef3b60bd11313234610f8c6c2cc858463008afed47a5d4c34\": rpc error: code = NotFound desc = could not find container \"b9d187221173308ef3b60bd11313234610f8c6c2cc858463008afed47a5d4c34\": container with ID starting with b9d187221173308ef3b60bd11313234610f8c6c2cc858463008afed47a5d4c34 not found: ID does not exist"
Nov 25 07:07:09 crc kubenswrapper[4482]: I1125 07:07:09.842850 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be5da322-ab33-4deb-8049-91903df11263" path="/var/lib/kubelet/pods/be5da322-ab33-4deb-8049-91903df11263/volumes"
Nov 25 07:07:21 crc kubenswrapper[4482]: I1125 07:07:21.293367 4482 generic.go:334] "Generic (PLEG): container finished" podID="874cee1f-6776-46a5-b8bb-bed0bf553194" containerID="d3f4431f12ea8e2a7eac57c8eeb958000f3787f017b795fc317d7988b9ee3906" exitCode=0
Nov 25 07:07:21 crc kubenswrapper[4482]: I1125 07:07:21.293577 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"874cee1f-6776-46a5-b8bb-bed0bf553194","Type":"ContainerDied","Data":"d3f4431f12ea8e2a7eac57c8eeb958000f3787f017b795fc317d7988b9ee3906"}
Nov 25 07:07:21 crc kubenswrapper[4482]: I1125 07:07:21.296489 4482 generic.go:334] "Generic (PLEG): container finished" podID="e7ce3d46-19fe-494c-a2ce-44ca82debd20" containerID="29389a4e0fcbcc017f6214ee29ef91f442034f29e7872d67b53946e5c80fa1dc" exitCode=0
Nov 25 07:07:21 crc kubenswrapper[4482]: I1125 07:07:21.296534 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7ce3d46-19fe-494c-a2ce-44ca82debd20","Type":"ContainerDied","Data":"29389a4e0fcbcc017f6214ee29ef91f442034f29e7872d67b53946e5c80fa1dc"}
Nov 25 07:07:22 crc kubenswrapper[4482]: I1125 07:07:22.307360 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e7ce3d46-19fe-494c-a2ce-44ca82debd20","Type":"ContainerStarted","Data":"ae746d4524e9812b24ab744a2fc38095c90b4e4609888f7135663846ec352b0e"}
Nov 25 07:07:22 crc kubenswrapper[4482]: I1125 07:07:22.308301 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Nov 25 07:07:22 crc kubenswrapper[4482]: I1125 07:07:22.310997 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"874cee1f-6776-46a5-b8bb-bed0bf553194","Type":"ContainerStarted","Data":"65d74d0b5d9932837100e89444e8e0d71fb92cd30861920c0c7c8ab799c23af8"}
Nov 25 07:07:22 crc kubenswrapper[4482]: I1125 07:07:22.311207 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 07:07:22 crc kubenswrapper[4482]: I1125 07:07:22.331928 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.331917276 podStartE2EDuration="36.331917276s" podCreationTimestamp="2025-11-25 07:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:07:22.32636058 +0000 UTC m=+1216.814591849" watchObservedRunningTime="2025-11-25 07:07:22.331917276 +0000 UTC m=+1216.820148525"
Nov 25 07:07:22 crc kubenswrapper[4482]: I1125 07:07:22.349769 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.349762196 podStartE2EDuration="36.349762196s" podCreationTimestamp="2025-11-25 07:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:07:22.348772851 +0000 UTC m=+1216.837004110" watchObservedRunningTime="2025-11-25 07:07:22.349762196 +0000 UTC m=+1216.837993456"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.431906 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"]
Nov 25 07:07:25 crc kubenswrapper[4482]: E1125 07:07:25.433146 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcdb3d0c-8d88-49e0-b213-703b54444699" containerName="dnsmasq-dns"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.433158 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcdb3d0c-8d88-49e0-b213-703b54444699" containerName="dnsmasq-dns"
Nov 25 07:07:25 crc kubenswrapper[4482]: E1125 07:07:25.433193 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcdb3d0c-8d88-49e0-b213-703b54444699" containerName="init"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.433198 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcdb3d0c-8d88-49e0-b213-703b54444699" containerName="init"
Nov 25 07:07:25 crc kubenswrapper[4482]: E1125 07:07:25.433212 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be5da322-ab33-4deb-8049-91903df11263" containerName="dnsmasq-dns"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.433218 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="be5da322-ab33-4deb-8049-91903df11263" containerName="dnsmasq-dns"
Nov 25 07:07:25 crc kubenswrapper[4482]: E1125 07:07:25.433236 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be5da322-ab33-4deb-8049-91903df11263" containerName="init"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.433241 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="be5da322-ab33-4deb-8049-91903df11263" containerName="init"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.433419 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="be5da322-ab33-4deb-8049-91903df11263" containerName="dnsmasq-dns"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.433431 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcdb3d0c-8d88-49e0-b213-703b54444699" containerName="dnsmasq-dns"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.433980 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.436046 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.436529 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.438040 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.438152 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.465696 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"]
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.545427 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.545522 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.545549 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wwls\" (UniqueName: \"kubernetes.io/projected/0d4c9324-af0f-4489-b925-597fbe262153-kube-api-access-9wwls\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.545576 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.646903 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.647012 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.647037 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wwls\" (UniqueName: \"kubernetes.io/projected/0d4c9324-af0f-4489-b925-597fbe262153-kube-api-access-9wwls\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.647065 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.658844 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.658969 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.659058 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.662828 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wwls\" (UniqueName: \"kubernetes.io/projected/0d4c9324-af0f-4489-b925-597fbe262153-kube-api-access-9wwls\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:25 crc kubenswrapper[4482]: I1125 07:07:25.747211 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"
Nov 25 07:07:26 crc kubenswrapper[4482]: I1125 07:07:26.366776 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp"]
Nov 25 07:07:26 crc kubenswrapper[4482]: W1125 07:07:26.368502 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d4c9324_af0f_4489_b925_597fbe262153.slice/crio-a96e472011169a12ced2c91ca2644f91e45d0e1e61875918b8619aca46782e64 WatchSource:0}: Error finding container a96e472011169a12ced2c91ca2644f91e45d0e1e61875918b8619aca46782e64: Status 404 returned error can't find the container with id a96e472011169a12ced2c91ca2644f91e45d0e1e61875918b8619aca46782e64
Nov 25 07:07:26 crc kubenswrapper[4482]: I1125 07:07:26.381781 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 07:07:27 crc kubenswrapper[4482]: I1125 07:07:27.363459 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp" event={"ID":"0d4c9324-af0f-4489-b925-597fbe262153","Type":"ContainerStarted","Data":"a96e472011169a12ced2c91ca2644f91e45d0e1e61875918b8619aca46782e64"}
Nov 25 07:07:36 crc kubenswrapper[4482]: I1125 07:07:36.678382 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 25 07:07:36 crc kubenswrapper[4482]: I1125 07:07:36.984359 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 07:07:37 crc kubenswrapper[4482]: I1125 07:07:37.517936 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp" event={"ID":"0d4c9324-af0f-4489-b925-597fbe262153","Type":"ContainerStarted","Data":"27427d64e489e5b3a55f91d23f81a50c0ab3f355cf089d8c8d2492c805c09057"}
Nov 25 07:07:37 crc kubenswrapper[4482]: I1125 07:07:37.536346 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp" podStartSLOduration=2.051450666 podStartE2EDuration="12.536323529s" podCreationTimestamp="2025-11-25 07:07:25 +0000 UTC" firstStartedPulling="2025-11-25 07:07:26.381108602 +0000 UTC m=+1220.869339860" lastFinishedPulling="2025-11-25 07:07:36.865981463 +0000 UTC m=+1231.354212723" observedRunningTime="2025-11-25 07:07:37.529707106 +0000 UTC m=+1232.017938356" watchObservedRunningTime="2025-11-25 07:07:37.536323529 +0000 UTC m=+1232.024554788"
watchObservedRunningTime="2025-11-25 07:07:37.536323529 +0000 UTC m=+1232.024554788" Nov 25 07:08:06 crc kubenswrapper[4482]: I1125 07:08:06.783662 4482 generic.go:334] "Generic (PLEG): container finished" podID="0d4c9324-af0f-4489-b925-597fbe262153" containerID="27427d64e489e5b3a55f91d23f81a50c0ab3f355cf089d8c8d2492c805c09057" exitCode=0 Nov 25 07:08:06 crc kubenswrapper[4482]: I1125 07:08:06.783752 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp" event={"ID":"0d4c9324-af0f-4489-b925-597fbe262153","Type":"ContainerDied","Data":"27427d64e489e5b3a55f91d23f81a50c0ab3f355cf089d8c8d2492c805c09057"} Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.129933 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.134963 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-ssh-key\") pod \"0d4c9324-af0f-4489-b925-597fbe262153\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.135076 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-inventory\") pod \"0d4c9324-af0f-4489-b925-597fbe262153\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.135140 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wwls\" (UniqueName: \"kubernetes.io/projected/0d4c9324-af0f-4489-b925-597fbe262153-kube-api-access-9wwls\") pod \"0d4c9324-af0f-4489-b925-597fbe262153\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.135235 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-repo-setup-combined-ca-bundle\") pod \"0d4c9324-af0f-4489-b925-597fbe262153\" (UID: \"0d4c9324-af0f-4489-b925-597fbe262153\") " Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.139373 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0d4c9324-af0f-4489-b925-597fbe262153" (UID: "0d4c9324-af0f-4489-b925-597fbe262153"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.139827 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d4c9324-af0f-4489-b925-597fbe262153-kube-api-access-9wwls" (OuterVolumeSpecName: "kube-api-access-9wwls") pod "0d4c9324-af0f-4489-b925-597fbe262153" (UID: "0d4c9324-af0f-4489-b925-597fbe262153"). InnerVolumeSpecName "kube-api-access-9wwls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.163990 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-inventory" (OuterVolumeSpecName: "inventory") pod "0d4c9324-af0f-4489-b925-597fbe262153" (UID: "0d4c9324-af0f-4489-b925-597fbe262153"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.169251 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0d4c9324-af0f-4489-b925-597fbe262153" (UID: "0d4c9324-af0f-4489-b925-597fbe262153"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.237862 4482 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.237891 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.237901 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d4c9324-af0f-4489-b925-597fbe262153-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.237911 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wwls\" (UniqueName: \"kubernetes.io/projected/0d4c9324-af0f-4489-b925-597fbe262153-kube-api-access-9wwls\") on node \"crc\" DevicePath \"\"" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.803516 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp" event={"ID":"0d4c9324-af0f-4489-b925-597fbe262153","Type":"ContainerDied","Data":"a96e472011169a12ced2c91ca2644f91e45d0e1e61875918b8619aca46782e64"} Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.803827 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a96e472011169a12ced2c91ca2644f91e45d0e1e61875918b8619aca46782e64" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.803732 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dkccp" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.886429 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw"] Nov 25 07:08:08 crc kubenswrapper[4482]: E1125 07:08:08.886852 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d4c9324-af0f-4489-b925-597fbe262153" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.886872 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d4c9324-af0f-4489-b925-597fbe262153" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.887075 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d4c9324-af0f-4489-b925-597fbe262153" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.887729 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.889618 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.889721 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.890862 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.893149 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.903386 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw"] Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.947488 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/948466f7-0730-4bec-a806-9c5a8baba624-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qqvpw\" (UID: \"948466f7-0730-4bec-a806-9c5a8baba624\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.947530 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcbsd\" (UniqueName: \"kubernetes.io/projected/948466f7-0730-4bec-a806-9c5a8baba624-kube-api-access-rcbsd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qqvpw\" (UID: \"948466f7-0730-4bec-a806-9c5a8baba624\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" Nov 25 07:08:08 crc kubenswrapper[4482]: I1125 07:08:08.947589 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/948466f7-0730-4bec-a806-9c5a8baba624-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qqvpw\" (UID: \"948466f7-0730-4bec-a806-9c5a8baba624\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" Nov 25 07:08:09 crc kubenswrapper[4482]: I1125 07:08:09.048667 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/948466f7-0730-4bec-a806-9c5a8baba624-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qqvpw\" (UID: \"948466f7-0730-4bec-a806-9c5a8baba624\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" Nov 25 07:08:09 crc kubenswrapper[4482]: I1125 07:08:09.048699 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcbsd\" (UniqueName: \"kubernetes.io/projected/948466f7-0730-4bec-a806-9c5a8baba624-kube-api-access-rcbsd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qqvpw\" (UID: \"948466f7-0730-4bec-a806-9c5a8baba624\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" Nov 25 07:08:09 crc kubenswrapper[4482]: I1125 07:08:09.048736 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/948466f7-0730-4bec-a806-9c5a8baba624-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qqvpw\" (UID: \"948466f7-0730-4bec-a806-9c5a8baba624\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" Nov 25 07:08:09 crc kubenswrapper[4482]: I1125 07:08:09.052816 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/948466f7-0730-4bec-a806-9c5a8baba624-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qqvpw\" (UID: \"948466f7-0730-4bec-a806-9c5a8baba624\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" Nov 25 07:08:09 crc kubenswrapper[4482]: I1125 07:08:09.054591 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/948466f7-0730-4bec-a806-9c5a8baba624-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qqvpw\" (UID: \"948466f7-0730-4bec-a806-9c5a8baba624\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" Nov 25 07:08:09 crc kubenswrapper[4482]: I1125 07:08:09.064900 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcbsd\" (UniqueName: \"kubernetes.io/projected/948466f7-0730-4bec-a806-9c5a8baba624-kube-api-access-rcbsd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qqvpw\" (UID: \"948466f7-0730-4bec-a806-9c5a8baba624\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" Nov 25 07:08:09 crc kubenswrapper[4482]: I1125 07:08:09.201788 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" Nov 25 07:08:09 crc kubenswrapper[4482]: I1125 07:08:09.681781 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw"] Nov 25 07:08:09 crc kubenswrapper[4482]: W1125 07:08:09.683085 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod948466f7_0730_4bec_a806_9c5a8baba624.slice/crio-6698b0e3fe91acf5caab3237adaa744c2927b1c870a327f4ef617335b8ab8eb0 WatchSource:0}: Error finding container 6698b0e3fe91acf5caab3237adaa744c2927b1c870a327f4ef617335b8ab8eb0: Status 404 returned error can't find the container with id 6698b0e3fe91acf5caab3237adaa744c2927b1c870a327f4ef617335b8ab8eb0 Nov 25 07:08:09 crc kubenswrapper[4482]: I1125 07:08:09.813933 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" event={"ID":"948466f7-0730-4bec-a806-9c5a8baba624","Type":"ContainerStarted","Data":"6698b0e3fe91acf5caab3237adaa744c2927b1c870a327f4ef617335b8ab8eb0"} Nov 25 07:08:10 crc kubenswrapper[4482]: I1125 07:08:10.823889 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" event={"ID":"948466f7-0730-4bec-a806-9c5a8baba624","Type":"ContainerStarted","Data":"eaa9cfc57f0b00164d0d662e08b147f3afbc067b14dfc74637dc125a032ca3d5"} Nov 25 07:08:10 crc kubenswrapper[4482]: I1125 07:08:10.841754 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" podStartSLOduration=2.283401072 podStartE2EDuration="2.841737223s" podCreationTimestamp="2025-11-25 07:08:08 +0000 UTC" firstStartedPulling="2025-11-25 07:08:09.685612023 +0000 UTC m=+1264.173843282" lastFinishedPulling="2025-11-25 07:08:10.243948174 +0000 UTC m=+1264.732179433" observedRunningTime="2025-11-25 07:08:10.836914521 +0000 UTC m=+1265.325145780" watchObservedRunningTime="2025-11-25 07:08:10.841737223 +0000 UTC m=+1265.329968483" Nov 25 07:08:12 crc kubenswrapper[4482]: I1125 07:08:12.842078 4482 generic.go:334] "Generic (PLEG): container finished" podID="948466f7-0730-4bec-a806-9c5a8baba624" containerID="eaa9cfc57f0b00164d0d662e08b147f3afbc067b14dfc74637dc125a032ca3d5" exitCode=0 Nov 25 07:08:12 crc kubenswrapper[4482]: I1125 07:08:12.842138 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" event={"ID":"948466f7-0730-4bec-a806-9c5a8baba624","Type":"ContainerDied","Data":"eaa9cfc57f0b00164d0d662e08b147f3afbc067b14dfc74637dc125a032ca3d5"} Nov 25 07:08:14 crc kubenswrapper[4482]: I1125 07:08:14.180152 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" Nov 25 07:08:14 crc kubenswrapper[4482]: I1125 07:08:14.355305 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/948466f7-0730-4bec-a806-9c5a8baba624-ssh-key\") pod \"948466f7-0730-4bec-a806-9c5a8baba624\" (UID: \"948466f7-0730-4bec-a806-9c5a8baba624\") " Nov 25 07:08:14 crc kubenswrapper[4482]: I1125 07:08:14.355451 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcbsd\" (UniqueName: \"kubernetes.io/projected/948466f7-0730-4bec-a806-9c5a8baba624-kube-api-access-rcbsd\") pod \"948466f7-0730-4bec-a806-9c5a8baba624\" (UID: \"948466f7-0730-4bec-a806-9c5a8baba624\") " Nov 25 07:08:14 crc kubenswrapper[4482]: I1125 07:08:14.355666 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/948466f7-0730-4bec-a806-9c5a8baba624-inventory\") pod \"948466f7-0730-4bec-a806-9c5a8baba624\" (UID: \"948466f7-0730-4bec-a806-9c5a8baba624\") " Nov 25 07:08:14 crc kubenswrapper[4482]: I1125 07:08:14.360426 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948466f7-0730-4bec-a806-9c5a8baba624-kube-api-access-rcbsd" (OuterVolumeSpecName: "kube-api-access-rcbsd") pod "948466f7-0730-4bec-a806-9c5a8baba624" (UID: "948466f7-0730-4bec-a806-9c5a8baba624"). InnerVolumeSpecName "kube-api-access-rcbsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:08:14 crc kubenswrapper[4482]: I1125 07:08:14.378545 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948466f7-0730-4bec-a806-9c5a8baba624-inventory" (OuterVolumeSpecName: "inventory") pod "948466f7-0730-4bec-a806-9c5a8baba624" (UID: "948466f7-0730-4bec-a806-9c5a8baba624"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:08:14 crc kubenswrapper[4482]: I1125 07:08:14.379268 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948466f7-0730-4bec-a806-9c5a8baba624-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "948466f7-0730-4bec-a806-9c5a8baba624" (UID: "948466f7-0730-4bec-a806-9c5a8baba624"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:08:14 crc kubenswrapper[4482]: I1125 07:08:14.458346 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/948466f7-0730-4bec-a806-9c5a8baba624-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:08:14 crc kubenswrapper[4482]: I1125 07:08:14.458375 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcbsd\" (UniqueName: \"kubernetes.io/projected/948466f7-0730-4bec-a806-9c5a8baba624-kube-api-access-rcbsd\") on node \"crc\" DevicePath \"\"" Nov 25 07:08:14 crc kubenswrapper[4482]: I1125 07:08:14.458388 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/948466f7-0730-4bec-a806-9c5a8baba624-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:08:14 crc kubenswrapper[4482]: I1125 07:08:14.857988 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" event={"ID":"948466f7-0730-4bec-a806-9c5a8baba624","Type":"ContainerDied","Data":"6698b0e3fe91acf5caab3237adaa744c2927b1c870a327f4ef617335b8ab8eb0"} Nov 25 07:08:14 crc kubenswrapper[4482]: I1125 07:08:14.858197 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6698b0e3fe91acf5caab3237adaa744c2927b1c870a327f4ef617335b8ab8eb0" Nov 25 07:08:14 crc kubenswrapper[4482]: I1125 07:08:14.858044 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qqvpw" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.254412 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn"] Nov 25 07:08:15 crc kubenswrapper[4482]: E1125 07:08:15.254747 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948466f7-0730-4bec-a806-9c5a8baba624" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.254760 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="948466f7-0730-4bec-a806-9c5a8baba624" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.254931 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="948466f7-0730-4bec-a806-9c5a8baba624" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.255458 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.257717 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.257868 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.258127 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.258414 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.273627 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn"] Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.372467 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.372539 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.372617 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.372654 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp7zg\" (UniqueName: \"kubernetes.io/projected/39a79591-2e93-478b-8091-e4ea6dca13b1-kube-api-access-bp7zg\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.474082 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.474146 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.474184 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp7zg\" (UniqueName: \"kubernetes.io/projected/39a79591-2e93-478b-8091-e4ea6dca13b1-kube-api-access-bp7zg\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.474265 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.479627 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.479624 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.479700 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.486984 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp7zg\" (UniqueName: \"kubernetes.io/projected/39a79591-2e93-478b-8091-e4ea6dca13b1-kube-api-access-bp7zg\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:15 crc kubenswrapper[4482]: I1125 07:08:15.573336 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" Nov 25 07:08:16 crc kubenswrapper[4482]: I1125 07:08:16.030147 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn"] Nov 25 07:08:16 crc kubenswrapper[4482]: I1125 07:08:16.873767 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" event={"ID":"39a79591-2e93-478b-8091-e4ea6dca13b1","Type":"ContainerStarted","Data":"f5948215d23d4ef6a772c1f328055595057e8b4b5793f8964b78579b8b885b4a"} Nov 25 07:08:16 crc kubenswrapper[4482]: I1125 07:08:16.874068 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" event={"ID":"39a79591-2e93-478b-8091-e4ea6dca13b1","Type":"ContainerStarted","Data":"998b58dd270595e4db07847c06a0118750806a26eb9106770959317485f42cf6"} Nov 25 07:08:16 crc kubenswrapper[4482]: I1125 07:08:16.888633 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" podStartSLOduration=1.360626731 podStartE2EDuration="1.888622585s" podCreationTimestamp="2025-11-25 07:08:15 +0000 UTC" firstStartedPulling="2025-11-25 07:08:16.033629487 +0000 UTC m=+1270.521860746" lastFinishedPulling="2025-11-25 07:08:16.561625341 +0000 UTC m=+1271.049856600" observedRunningTime="2025-11-25 07:08:16.887496742 +0000 UTC m=+1271.375728001" watchObservedRunningTime="2025-11-25 07:08:16.888622585 +0000 UTC m=+1271.376853844" Nov 25 07:08:37 crc kubenswrapper[4482]: I1125 07:08:37.957684 4482 scope.go:117] "RemoveContainer" containerID="ac5c5842cbfbf2124176f2c6e6276d798b5f1f4b00838ac3bb8ab115496f661b" Nov 25 07:08:37 crc kubenswrapper[4482]: I1125 07:08:37.981086 4482 scope.go:117] "RemoveContainer" containerID="439205d16de18c8a65ebb873a29d16d2b37809ab701037fc2a36954b008972d6" Nov 25 07:08:39 crc kubenswrapper[4482]: I1125 07:08:39.117774 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:08:39 crc kubenswrapper[4482]: I1125 07:08:39.117970 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:09:09 crc kubenswrapper[4482]: I1125 07:09:09.117350 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:09:09 crc kubenswrapper[4482]: I1125 07:09:09.117733 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:09:38 crc kubenswrapper[4482]: I1125 07:09:38.058437 4482 scope.go:117] 
"RemoveContainer" containerID="00540160539f48823ad922bb8b446774532ea892d3bbad9145b32cafa55fc6ea" Nov 25 07:09:39 crc kubenswrapper[4482]: I1125 07:09:39.117819 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:09:39 crc kubenswrapper[4482]: I1125 07:09:39.117871 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:09:39 crc kubenswrapper[4482]: I1125 07:09:39.117910 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 07:09:39 crc kubenswrapper[4482]: I1125 07:09:39.118401 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63bdd9f0fce14d34b7bf553de17b7114201d3cbf1828eb48f5089e09d1c6eec0"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:09:39 crc kubenswrapper[4482]: I1125 07:09:39.118452 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://63bdd9f0fce14d34b7bf553de17b7114201d3cbf1828eb48f5089e09d1c6eec0" gracePeriod=600 Nov 25 07:09:39 crc kubenswrapper[4482]: I1125 07:09:39.497207 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="63bdd9f0fce14d34b7bf553de17b7114201d3cbf1828eb48f5089e09d1c6eec0" exitCode=0 Nov 25 07:09:39 crc kubenswrapper[4482]: I1125 07:09:39.497276 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"63bdd9f0fce14d34b7bf553de17b7114201d3cbf1828eb48f5089e09d1c6eec0"} Nov 25 07:09:39 crc kubenswrapper[4482]: I1125 07:09:39.497413 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77"} Nov 25 07:09:39 crc kubenswrapper[4482]: I1125 07:09:39.497436 4482 scope.go:117] "RemoveContainer" containerID="74ac51368ca9a85524d27db3fb42de85573ff45ef8883e47eb5fe2759d039e48" Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.085337 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z8gc8"] Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.092116 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.132072 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z8gc8"] Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.209859 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f644fc96-0851-4a33-8ac8-a2492fe1ba50-catalog-content\") pod \"community-operators-z8gc8\" (UID: \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\") " pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.209916 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f644fc96-0851-4a33-8ac8-a2492fe1ba50-utilities\") pod \"community-operators-z8gc8\" (UID: \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\") " pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.209974 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm7dt\" (UniqueName: \"kubernetes.io/projected/f644fc96-0851-4a33-8ac8-a2492fe1ba50-kube-api-access-pm7dt\") pod \"community-operators-z8gc8\" (UID: \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\") " pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.311853 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f644fc96-0851-4a33-8ac8-a2492fe1ba50-catalog-content\") pod \"community-operators-z8gc8\" (UID: \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\") " pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.312616 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f644fc96-0851-4a33-8ac8-a2492fe1ba50-utilities\") pod \"community-operators-z8gc8\" (UID: \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\") " pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.312626 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f644fc96-0851-4a33-8ac8-a2492fe1ba50-catalog-content\") pod \"community-operators-z8gc8\" (UID: \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\") " pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.312836 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f644fc96-0851-4a33-8ac8-a2492fe1ba50-utilities\") pod \"community-operators-z8gc8\" (UID: \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\") " pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.312992 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm7dt\" (UniqueName: \"kubernetes.io/projected/f644fc96-0851-4a33-8ac8-a2492fe1ba50-kube-api-access-pm7dt\") pod \"community-operators-z8gc8\" (UID: \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\") " pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.335475 4482 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pm7dt\" (UniqueName: \"kubernetes.io/projected/f644fc96-0851-4a33-8ac8-a2492fe1ba50-kube-api-access-pm7dt\") pod \"community-operators-z8gc8\" (UID: \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\") " pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.409561 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:09:55 crc kubenswrapper[4482]: I1125 07:09:55.827684 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z8gc8"] Nov 25 07:09:56 crc kubenswrapper[4482]: I1125 07:09:56.658314 4482 generic.go:334] "Generic (PLEG): container finished" podID="f644fc96-0851-4a33-8ac8-a2492fe1ba50" containerID="1e89e7f0f557ef01f1de0c8a6b19d6b15b94a153faff1d49a24642e9ef30b4fa" exitCode=0 Nov 25 07:09:56 crc kubenswrapper[4482]: I1125 07:09:56.658985 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8gc8" event={"ID":"f644fc96-0851-4a33-8ac8-a2492fe1ba50","Type":"ContainerDied","Data":"1e89e7f0f557ef01f1de0c8a6b19d6b15b94a153faff1d49a24642e9ef30b4fa"} Nov 25 07:09:56 crc kubenswrapper[4482]: I1125 07:09:56.659751 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8gc8" event={"ID":"f644fc96-0851-4a33-8ac8-a2492fe1ba50","Type":"ContainerStarted","Data":"a07c4af48439312800f468bd74405b30da1330c8424a8dd38d92006bd6491a28"} Nov 25 07:09:57 crc kubenswrapper[4482]: I1125 07:09:57.672103 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8gc8" event={"ID":"f644fc96-0851-4a33-8ac8-a2492fe1ba50","Type":"ContainerStarted","Data":"ae80206d47b756b0f74b4c90abd67c7dcb4836a61288818f3f2dd4b8b603be40"} Nov 25 07:09:58 crc kubenswrapper[4482]: I1125 07:09:58.685421 4482 generic.go:334] "Generic (PLEG): container finished" podID="f644fc96-0851-4a33-8ac8-a2492fe1ba50" containerID="ae80206d47b756b0f74b4c90abd67c7dcb4836a61288818f3f2dd4b8b603be40" exitCode=0 Nov 25 07:09:58 crc kubenswrapper[4482]: I1125 07:09:58.685630 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8gc8" event={"ID":"f644fc96-0851-4a33-8ac8-a2492fe1ba50","Type":"ContainerDied","Data":"ae80206d47b756b0f74b4c90abd67c7dcb4836a61288818f3f2dd4b8b603be40"} Nov 25 07:09:59 crc kubenswrapper[4482]: I1125 07:09:59.697773 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8gc8" event={"ID":"f644fc96-0851-4a33-8ac8-a2492fe1ba50","Type":"ContainerStarted","Data":"46838e19de4f4f4250bbb9e2fe32670221e4ee110582046bf2bd60254da17d6b"} Nov 25 07:09:59 crc kubenswrapper[4482]: I1125 07:09:59.717947 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z8gc8" podStartSLOduration=2.135952189 podStartE2EDuration="4.717923427s" podCreationTimestamp="2025-11-25 07:09:55 +0000 UTC" firstStartedPulling="2025-11-25 07:09:56.661023533 +0000 UTC m=+1371.149254791" lastFinishedPulling="2025-11-25 07:09:59.24299477 +0000 UTC m=+1373.731226029" observedRunningTime="2025-11-25 07:09:59.717125553 +0000 UTC m=+1374.205356812" watchObservedRunningTime="2025-11-25 07:09:59.717923427 +0000 UTC m=+1374.206154676" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.013754 4482 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.016738 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.032579 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.041775 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.045998 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.118292 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67c193ad-3763-483b-bcb2-48a26ad663bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67c193ad-3763-483b-bcb2-48a26ad663bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.118430 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67c193ad-3763-483b-bcb2-48a26ad663bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67c193ad-3763-483b-bcb2-48a26ad663bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.220922 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67c193ad-3763-483b-bcb2-48a26ad663bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67c193ad-3763-483b-bcb2-48a26ad663bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.221048 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67c193ad-3763-483b-bcb2-48a26ad663bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67c193ad-3763-483b-bcb2-48a26ad663bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.221205 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67c193ad-3763-483b-bcb2-48a26ad663bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67c193ad-3763-483b-bcb2-48a26ad663bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.243445 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67c193ad-3763-483b-bcb2-48a26ad663bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67c193ad-3763-483b-bcb2-48a26ad663bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.346623 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.409756 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.409831 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.476161 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.784124 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.823992 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 07:10:05 crc kubenswrapper[4482]: I1125 07:10:05.859568 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z8gc8"] Nov 25 07:10:06 crc kubenswrapper[4482]: I1125 07:10:06.763634 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"67c193ad-3763-483b-bcb2-48a26ad663bb","Type":"ContainerStarted","Data":"e5a6e3999c397a2521698ef21be6b74e2c8127be373d15d61d45f34252956b4d"} Nov 25 07:10:06 crc kubenswrapper[4482]: I1125 07:10:06.763996 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"67c193ad-3763-483b-bcb2-48a26ad663bb","Type":"ContainerStarted","Data":"610b16e3e211ddea0150cfa1e096a166e215a856b1f008d8d6405cdda31bec07"} Nov 25 07:10:06 crc kubenswrapper[4482]: I1125 07:10:06.786774 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.78676137 podStartE2EDuration="2.78676137s" podCreationTimestamp="2025-11-25 07:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:10:06.783891439 +0000 UTC m=+1381.272122698" watchObservedRunningTime="2025-11-25 07:10:06.78676137 +0000 UTC m=+1381.274992628" Nov 25 07:10:07 crc kubenswrapper[4482]: I1125 07:10:07.781385 4482 generic.go:334] "Generic (PLEG): container finished" podID="67c193ad-3763-483b-bcb2-48a26ad663bb" containerID="e5a6e3999c397a2521698ef21be6b74e2c8127be373d15d61d45f34252956b4d" exitCode=0 Nov 25 07:10:07 crc kubenswrapper[4482]: I1125 07:10:07.781618 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"67c193ad-3763-483b-bcb2-48a26ad663bb","Type":"ContainerDied","Data":"e5a6e3999c397a2521698ef21be6b74e2c8127be373d15d61d45f34252956b4d"} Nov 25 07:10:07 crc kubenswrapper[4482]: I1125 07:10:07.781705 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z8gc8" podUID="f644fc96-0851-4a33-8ac8-a2492fe1ba50" containerName="registry-server" containerID="cri-o://46838e19de4f4f4250bbb9e2fe32670221e4ee110582046bf2bd60254da17d6b" gracePeriod=2 Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.174970 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.290154 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm7dt\" (UniqueName: \"kubernetes.io/projected/f644fc96-0851-4a33-8ac8-a2492fe1ba50-kube-api-access-pm7dt\") pod \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\" (UID: \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\") " Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.290757 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f644fc96-0851-4a33-8ac8-a2492fe1ba50-catalog-content\") pod \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\" (UID: \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\") " Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.290888 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f644fc96-0851-4a33-8ac8-a2492fe1ba50-utilities\") pod \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\" (UID: \"f644fc96-0851-4a33-8ac8-a2492fe1ba50\") " Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.291609 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f644fc96-0851-4a33-8ac8-a2492fe1ba50-utilities" (OuterVolumeSpecName: "utilities") pod "f644fc96-0851-4a33-8ac8-a2492fe1ba50" (UID: "f644fc96-0851-4a33-8ac8-a2492fe1ba50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.292204 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f644fc96-0851-4a33-8ac8-a2492fe1ba50-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.296411 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f644fc96-0851-4a33-8ac8-a2492fe1ba50-kube-api-access-pm7dt" (OuterVolumeSpecName: "kube-api-access-pm7dt") pod "f644fc96-0851-4a33-8ac8-a2492fe1ba50" (UID: "f644fc96-0851-4a33-8ac8-a2492fe1ba50"). InnerVolumeSpecName "kube-api-access-pm7dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.332657 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f644fc96-0851-4a33-8ac8-a2492fe1ba50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f644fc96-0851-4a33-8ac8-a2492fe1ba50" (UID: "f644fc96-0851-4a33-8ac8-a2492fe1ba50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.395286 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f644fc96-0851-4a33-8ac8-a2492fe1ba50-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.395324 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm7dt\" (UniqueName: \"kubernetes.io/projected/f644fc96-0851-4a33-8ac8-a2492fe1ba50-kube-api-access-pm7dt\") on node \"crc\" DevicePath \"\"" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.796260 4482 generic.go:334] "Generic (PLEG): container finished" podID="f644fc96-0851-4a33-8ac8-a2492fe1ba50" containerID="46838e19de4f4f4250bbb9e2fe32670221e4ee110582046bf2bd60254da17d6b" exitCode=0 Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.796942 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z8gc8" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.798290 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8gc8" event={"ID":"f644fc96-0851-4a33-8ac8-a2492fe1ba50","Type":"ContainerDied","Data":"46838e19de4f4f4250bbb9e2fe32670221e4ee110582046bf2bd60254da17d6b"} Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.798349 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8gc8" event={"ID":"f644fc96-0851-4a33-8ac8-a2492fe1ba50","Type":"ContainerDied","Data":"a07c4af48439312800f468bd74405b30da1330c8424a8dd38d92006bd6491a28"} Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.798394 4482 scope.go:117] "RemoveContainer" containerID="46838e19de4f4f4250bbb9e2fe32670221e4ee110582046bf2bd60254da17d6b" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.838306 4482 scope.go:117] "RemoveContainer" containerID="ae80206d47b756b0f74b4c90abd67c7dcb4836a61288818f3f2dd4b8b603be40" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.838568 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z8gc8"] Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.850435 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z8gc8"] Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.855778 4482 scope.go:117] "RemoveContainer" containerID="1e89e7f0f557ef01f1de0c8a6b19d6b15b94a153faff1d49a24642e9ef30b4fa" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.904312 4482 scope.go:117] "RemoveContainer" containerID="46838e19de4f4f4250bbb9e2fe32670221e4ee110582046bf2bd60254da17d6b" Nov 25 07:10:08 crc kubenswrapper[4482]: E1125 07:10:08.911871 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46838e19de4f4f4250bbb9e2fe32670221e4ee110582046bf2bd60254da17d6b\": container with ID starting with 46838e19de4f4f4250bbb9e2fe32670221e4ee110582046bf2bd60254da17d6b not found: ID does not exist" containerID="46838e19de4f4f4250bbb9e2fe32670221e4ee110582046bf2bd60254da17d6b" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.911905 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46838e19de4f4f4250bbb9e2fe32670221e4ee110582046bf2bd60254da17d6b"} err="failed to get container status 
\"46838e19de4f4f4250bbb9e2fe32670221e4ee110582046bf2bd60254da17d6b\": rpc error: code = NotFound desc = could not find container \"46838e19de4f4f4250bbb9e2fe32670221e4ee110582046bf2bd60254da17d6b\": container with ID starting with 46838e19de4f4f4250bbb9e2fe32670221e4ee110582046bf2bd60254da17d6b not found: ID does not exist" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.911928 4482 scope.go:117] "RemoveContainer" containerID="ae80206d47b756b0f74b4c90abd67c7dcb4836a61288818f3f2dd4b8b603be40" Nov 25 07:10:08 crc kubenswrapper[4482]: E1125 07:10:08.914348 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae80206d47b756b0f74b4c90abd67c7dcb4836a61288818f3f2dd4b8b603be40\": container with ID starting with ae80206d47b756b0f74b4c90abd67c7dcb4836a61288818f3f2dd4b8b603be40 not found: ID does not exist" containerID="ae80206d47b756b0f74b4c90abd67c7dcb4836a61288818f3f2dd4b8b603be40" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.914372 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae80206d47b756b0f74b4c90abd67c7dcb4836a61288818f3f2dd4b8b603be40"} err="failed to get container status \"ae80206d47b756b0f74b4c90abd67c7dcb4836a61288818f3f2dd4b8b603be40\": rpc error: code = NotFound desc = could not find container \"ae80206d47b756b0f74b4c90abd67c7dcb4836a61288818f3f2dd4b8b603be40\": container with ID starting with ae80206d47b756b0f74b4c90abd67c7dcb4836a61288818f3f2dd4b8b603be40 not found: ID does not exist" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.914388 4482 scope.go:117] "RemoveContainer" containerID="1e89e7f0f557ef01f1de0c8a6b19d6b15b94a153faff1d49a24642e9ef30b4fa" Nov 25 07:10:08 crc kubenswrapper[4482]: E1125 07:10:08.916562 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e89e7f0f557ef01f1de0c8a6b19d6b15b94a153faff1d49a24642e9ef30b4fa\": container with ID starting with 1e89e7f0f557ef01f1de0c8a6b19d6b15b94a153faff1d49a24642e9ef30b4fa not found: ID does not exist" containerID="1e89e7f0f557ef01f1de0c8a6b19d6b15b94a153faff1d49a24642e9ef30b4fa" Nov 25 07:10:08 crc kubenswrapper[4482]: I1125 07:10:08.916585 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e89e7f0f557ef01f1de0c8a6b19d6b15b94a153faff1d49a24642e9ef30b4fa"} err="failed to get container status \"1e89e7f0f557ef01f1de0c8a6b19d6b15b94a153faff1d49a24642e9ef30b4fa\": rpc error: code = NotFound desc = could not find container \"1e89e7f0f557ef01f1de0c8a6b19d6b15b94a153faff1d49a24642e9ef30b4fa\": container with ID starting with 1e89e7f0f557ef01f1de0c8a6b19d6b15b94a153faff1d49a24642e9ef30b4fa not found: ID does not exist" Nov 25 07:10:09 crc kubenswrapper[4482]: I1125 07:10:09.112235 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 07:10:09 crc kubenswrapper[4482]: I1125 07:10:09.211959 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67c193ad-3763-483b-bcb2-48a26ad663bb-kubelet-dir\") pod \"67c193ad-3763-483b-bcb2-48a26ad663bb\" (UID: \"67c193ad-3763-483b-bcb2-48a26ad663bb\") " Nov 25 07:10:09 crc kubenswrapper[4482]: I1125 07:10:09.212133 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67c193ad-3763-483b-bcb2-48a26ad663bb-kube-api-access\") pod \"67c193ad-3763-483b-bcb2-48a26ad663bb\" (UID: \"67c193ad-3763-483b-bcb2-48a26ad663bb\") " Nov 25 07:10:09 crc kubenswrapper[4482]: I1125 07:10:09.212265 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67c193ad-3763-483b-bcb2-48a26ad663bb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "67c193ad-3763-483b-bcb2-48a26ad663bb" (UID: "67c193ad-3763-483b-bcb2-48a26ad663bb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:10:09 crc kubenswrapper[4482]: I1125 07:10:09.212795 4482 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67c193ad-3763-483b-bcb2-48a26ad663bb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 07:10:09 crc kubenswrapper[4482]: I1125 07:10:09.218938 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c193ad-3763-483b-bcb2-48a26ad663bb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "67c193ad-3763-483b-bcb2-48a26ad663bb" (UID: "67c193ad-3763-483b-bcb2-48a26ad663bb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:10:09 crc kubenswrapper[4482]: I1125 07:10:09.315944 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67c193ad-3763-483b-bcb2-48a26ad663bb-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 07:10:09 crc kubenswrapper[4482]: I1125 07:10:09.808767 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"67c193ad-3763-483b-bcb2-48a26ad663bb","Type":"ContainerDied","Data":"610b16e3e211ddea0150cfa1e096a166e215a856b1f008d8d6405cdda31bec07"} Nov 25 07:10:09 crc kubenswrapper[4482]: I1125 07:10:09.808824 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="610b16e3e211ddea0150cfa1e096a166e215a856b1f008d8d6405cdda31bec07" Nov 25 07:10:09 crc kubenswrapper[4482]: I1125 07:10:09.808853 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 07:10:09 crc kubenswrapper[4482]: I1125 07:10:09.840438 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f644fc96-0851-4a33-8ac8-a2492fe1ba50" path="/var/lib/kubelet/pods/f644fc96-0851-4a33-8ac8-a2492fe1ba50/volumes" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.012220 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 07:10:12 crc kubenswrapper[4482]: E1125 07:10:12.012922 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f644fc96-0851-4a33-8ac8-a2492fe1ba50" containerName="registry-server" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.012938 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f644fc96-0851-4a33-8ac8-a2492fe1ba50" containerName="registry-server" Nov 25 07:10:12 crc kubenswrapper[4482]: E1125 07:10:12.012955 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c193ad-3763-483b-bcb2-48a26ad663bb" containerName="pruner" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.012961 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c193ad-3763-483b-bcb2-48a26ad663bb" containerName="pruner" Nov 25 07:10:12 crc kubenswrapper[4482]: E1125 07:10:12.012977 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f644fc96-0851-4a33-8ac8-a2492fe1ba50" containerName="extract-content" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.012983 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f644fc96-0851-4a33-8ac8-a2492fe1ba50" containerName="extract-content" Nov 25 07:10:12 crc kubenswrapper[4482]: E1125 07:10:12.013000 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f644fc96-0851-4a33-8ac8-a2492fe1ba50" containerName="extract-utilities" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.013008 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f644fc96-0851-4a33-8ac8-a2492fe1ba50" containerName="extract-utilities" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.013202 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f644fc96-0851-4a33-8ac8-a2492fe1ba50" containerName="registry-server" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.013226 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c193ad-3763-483b-bcb2-48a26ad663bb" containerName="pruner" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.013920 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.019001 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.020517 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.020896 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.073721 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ac6721d-3577-4cc2-876e-64a829e86b2b-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ac6721d-3577-4cc2-876e-64a829e86b2b\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.073965 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ac6721d-3577-4cc2-876e-64a829e86b2b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ac6721d-3577-4cc2-876e-64a829e86b2b\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.074040 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ac6721d-3577-4cc2-876e-64a829e86b2b-var-lock\") pod \"installer-9-crc\" (UID: \"2ac6721d-3577-4cc2-876e-64a829e86b2b\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.174803 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ac6721d-3577-4cc2-876e-64a829e86b2b-var-lock\") pod \"installer-9-crc\" (UID: \"2ac6721d-3577-4cc2-876e-64a829e86b2b\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.174914 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ac6721d-3577-4cc2-876e-64a829e86b2b-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ac6721d-3577-4cc2-876e-64a829e86b2b\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.174963 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ac6721d-3577-4cc2-876e-64a829e86b2b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ac6721d-3577-4cc2-876e-64a829e86b2b\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.175035 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ac6721d-3577-4cc2-876e-64a829e86b2b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ac6721d-3577-4cc2-876e-64a829e86b2b\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.175070 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ac6721d-3577-4cc2-876e-64a829e86b2b-var-lock\") pod \"installer-9-crc\" (UID: 
\"2ac6721d-3577-4cc2-876e-64a829e86b2b\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.192826 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ac6721d-3577-4cc2-876e-64a829e86b2b-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ac6721d-3577-4cc2-876e-64a829e86b2b\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.328254 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.756359 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 07:10:12 crc kubenswrapper[4482]: I1125 07:10:12.835656 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ac6721d-3577-4cc2-876e-64a829e86b2b","Type":"ContainerStarted","Data":"d1a5fb5e0e3518c8883a5e6ab75f2a86f755001cc08f72ce9c6a48b33db06ead"} Nov 25 07:10:13 crc kubenswrapper[4482]: I1125 07:10:13.847077 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ac6721d-3577-4cc2-876e-64a829e86b2b","Type":"ContainerStarted","Data":"b1cfbb9eb7312788126df0c65af63e6ae34c36bec5aa3389c9a001ebd22733cf"} Nov 25 07:10:13 crc kubenswrapper[4482]: I1125 07:10:13.868311 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.868294758 podStartE2EDuration="2.868294758s" podCreationTimestamp="2025-11-25 07:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:10:13.866274629 +0000 UTC m=+1388.354505877" watchObservedRunningTime="2025-11-25 07:10:13.868294758 +0000 UTC m=+1388.356526016" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.813411 4482 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.814919 4482 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815090 4482 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815096 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: E1125 07:10:50.815342 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815359 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 07:10:50 crc kubenswrapper[4482]: E1125 07:10:50.815369 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815375 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 07:10:50 crc kubenswrapper[4482]: E1125 07:10:50.815389 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815395 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 07:10:50 crc kubenswrapper[4482]: E1125 07:10:50.815403 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815407 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 07:10:50 crc kubenswrapper[4482]: E1125 07:10:50.815422 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815427 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 07:10:50 crc kubenswrapper[4482]: E1125 07:10:50.815436 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815443 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 07:10:50 crc kubenswrapper[4482]: E1125 07:10:50.815454 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815460 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815626 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59" gracePeriod=15 Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815745 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705" gracePeriod=15 Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815751 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815830 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815859 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815871 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.815878 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.816040 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560" gracePeriod=15 Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.816162 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b" gracePeriod=15 Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.816940 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8" gracePeriod=15 Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.819633 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.819948 4482 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.854196 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.854230 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.854249 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.854271 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.854317 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.854352 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.854403 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.854483 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.857784 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956084 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956148 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956180 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956198 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956212 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956242 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956265 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956308 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956376 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956407 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956428 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 
07:10:50.956444 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956464 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956481 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956498 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:50 crc kubenswrapper[4482]: I1125 07:10:50.956517 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:51 crc kubenswrapper[4482]: I1125 07:10:51.130652 4482 generic.go:334] "Generic (PLEG): container finished" podID="2ac6721d-3577-4cc2-876e-64a829e86b2b" containerID="b1cfbb9eb7312788126df0c65af63e6ae34c36bec5aa3389c9a001ebd22733cf" exitCode=0 Nov 25 07:10:51 crc kubenswrapper[4482]: I1125 07:10:51.130734 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ac6721d-3577-4cc2-876e-64a829e86b2b","Type":"ContainerDied","Data":"b1cfbb9eb7312788126df0c65af63e6ae34c36bec5aa3389c9a001ebd22733cf"} Nov 25 07:10:51 crc kubenswrapper[4482]: I1125 07:10:51.133214 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:51 crc kubenswrapper[4482]: I1125 07:10:51.133638 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:51 crc kubenswrapper[4482]: I1125 07:10:51.134861 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 07:10:51 crc kubenswrapper[4482]: I1125 
07:10:51.137349 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 07:10:51 crc kubenswrapper[4482]: I1125 07:10:51.138579 4482 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705" exitCode=0 Nov 25 07:10:51 crc kubenswrapper[4482]: I1125 07:10:51.138604 4482 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59" exitCode=0 Nov 25 07:10:51 crc kubenswrapper[4482]: I1125 07:10:51.138612 4482 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560" exitCode=0 Nov 25 07:10:51 crc kubenswrapper[4482]: I1125 07:10:51.138619 4482 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b" exitCode=2 Nov 25 07:10:51 crc kubenswrapper[4482]: I1125 07:10:51.138658 4482 scope.go:117] "RemoveContainer" containerID="5c19b5979563b857f6782aab277c08a0c96260be5546e4202f51b72f2138599f" Nov 25 07:10:51 crc kubenswrapper[4482]: I1125 07:10:51.147048 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:10:51 crc kubenswrapper[4482]: E1125 07:10:51.171901 4482 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.26.133:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b2e5bdf68fed1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 07:10:51.170782929 +0000 UTC m=+1425.659014188,LastTimestamp:2025-11-25 07:10:51.170782929 +0000 UTC m=+1425.659014188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.149200 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.152053 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8cfca3f69cb75a4b54631dcbff6934041cade0206b01ae122006f00eac358bc2"} Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.152105 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"26d7463a82fcee34ce88c0c1497fc3cf0c01711b7c414d2bfcfaa8a258e997d4"} Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.152585 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.152942 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.371224 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.371797 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.372125 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.479273 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ac6721d-3577-4cc2-876e-64a829e86b2b-var-lock\") pod \"2ac6721d-3577-4cc2-876e-64a829e86b2b\" (UID: \"2ac6721d-3577-4cc2-876e-64a829e86b2b\") " Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.479335 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ac6721d-3577-4cc2-876e-64a829e86b2b-kube-api-access\") pod \"2ac6721d-3577-4cc2-876e-64a829e86b2b\" (UID: \"2ac6721d-3577-4cc2-876e-64a829e86b2b\") " Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.479407 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ac6721d-3577-4cc2-876e-64a829e86b2b-var-lock" (OuterVolumeSpecName: "var-lock") pod "2ac6721d-3577-4cc2-876e-64a829e86b2b" (UID: "2ac6721d-3577-4cc2-876e-64a829e86b2b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.479553 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ac6721d-3577-4cc2-876e-64a829e86b2b-kubelet-dir\") pod \"2ac6721d-3577-4cc2-876e-64a829e86b2b\" (UID: \"2ac6721d-3577-4cc2-876e-64a829e86b2b\") " Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.479579 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ac6721d-3577-4cc2-876e-64a829e86b2b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2ac6721d-3577-4cc2-876e-64a829e86b2b" (UID: "2ac6721d-3577-4cc2-876e-64a829e86b2b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.479944 4482 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ac6721d-3577-4cc2-876e-64a829e86b2b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.479959 4482 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ac6721d-3577-4cc2-876e-64a829e86b2b-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.484992 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac6721d-3577-4cc2-876e-64a829e86b2b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2ac6721d-3577-4cc2-876e-64a829e86b2b" (UID: "2ac6721d-3577-4cc2-876e-64a829e86b2b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:10:52 crc kubenswrapper[4482]: I1125 07:10:52.580824 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ac6721d-3577-4cc2-876e-64a829e86b2b-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.162349 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.163551 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ac6721d-3577-4cc2-876e-64a829e86b2b","Type":"ContainerDied","Data":"d1a5fb5e0e3518c8883a5e6ab75f2a86f755001cc08f72ce9c6a48b33db06ead"} Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.163597 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1a5fb5e0e3518c8883a5e6ab75f2a86f755001cc08f72ce9c6a48b33db06ead" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.174299 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.174663 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.704552 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.705913 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.706749 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.707757 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.708360 4482 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.904320 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.904431 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.904755 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.904779 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.904782 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.904811 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.905527 4482 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.905546 4482 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 07:10:53 crc kubenswrapper[4482]: I1125 07:10:53.905574 4482 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.171450 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.172035 4482 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8" exitCode=0 Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.172087 4482 scope.go:117] "RemoveContainer" containerID="efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.172227 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.179296 4482 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.179582 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.179939 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.186741 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.187222 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.187465 4482 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.204598 4482 scope.go:117] "RemoveContainer" containerID="23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.229798 4482 scope.go:117] "RemoveContainer" containerID="febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.250451 4482 scope.go:117] "RemoveContainer" containerID="c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.289441 4482 scope.go:117] "RemoveContainer" containerID="7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.316879 4482 scope.go:117] "RemoveContainer" containerID="4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.343085 4482 scope.go:117] "RemoveContainer" containerID="efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705" Nov 25 07:10:54 crc 
kubenswrapper[4482]: E1125 07:10:54.343573 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\": container with ID starting with efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705 not found: ID does not exist" containerID="efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.343604 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705"} err="failed to get container status \"efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\": rpc error: code = NotFound desc = could not find container \"efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705\": container with ID starting with efa8e597ce5e7858c7e4031bd47a1483a46e18e9df448aea92af74390d7ba705 not found: ID does not exist" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.343624 4482 scope.go:117] "RemoveContainer" containerID="23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59" Nov 25 07:10:54 crc kubenswrapper[4482]: E1125 07:10:54.343943 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\": container with ID starting with 23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59 not found: ID does not exist" containerID="23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.344011 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59"} err="failed to get container status \"23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\": rpc error: code = NotFound desc = could not find container \"23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59\": container with ID starting with 23e6520522e408507d0304c5f7e46cd469ad8e74e88d2e89e7859e6660a95e59 not found: ID does not exist" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.344067 4482 scope.go:117] "RemoveContainer" containerID="febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560" Nov 25 07:10:54 crc kubenswrapper[4482]: E1125 07:10:54.344524 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\": container with ID starting with febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560 not found: ID does not exist" containerID="febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.344565 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560"} err="failed to get container status \"febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\": rpc error: code = NotFound desc = could not find container \"febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560\": container with ID starting with febf07c06033cead3ab974fa78729d98eab2f080ac26939776429dbf8bb83560 not found: ID does not exist" Nov 25 07:10:54 crc kubenswrapper[4482]: 
I1125 07:10:54.344579 4482 scope.go:117] "RemoveContainer" containerID="c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b" Nov 25 07:10:54 crc kubenswrapper[4482]: E1125 07:10:54.345011 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\": container with ID starting with c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b not found: ID does not exist" containerID="c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.345065 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b"} err="failed to get container status \"c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\": rpc error: code = NotFound desc = could not find container \"c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b\": container with ID starting with c41e45cf2c097bcca9434e41a37a16b805663fc2c22bff3d031ca93de71c3c9b not found: ID does not exist" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.345093 4482 scope.go:117] "RemoveContainer" containerID="7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8" Nov 25 07:10:54 crc kubenswrapper[4482]: E1125 07:10:54.347211 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\": container with ID starting with 7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8 not found: ID does not exist" containerID="7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.347239 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8"} err="failed to get container status \"7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\": rpc error: code = NotFound desc = could not find container \"7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8\": container with ID starting with 7438f75c61a0731c5402ecd608a961b04fcc5edd0e5ce9e2f3ec169f4739adf8 not found: ID does not exist" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.347253 4482 scope.go:117] "RemoveContainer" containerID="4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a" Nov 25 07:10:54 crc kubenswrapper[4482]: E1125 07:10:54.347484 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\": container with ID starting with 4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a not found: ID does not exist" containerID="4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a" Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.347505 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a"} err="failed to get container status \"4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\": rpc error: code = NotFound desc = could not find container \"4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a\": container 
with ID starting with 4ea644f3d8974feb7198c7ae704fe95978c137733e7658bfa29f594da25d201a not found: ID does not exist"
Nov 25 07:10:54 crc kubenswrapper[4482]: E1125 07:10:54.571087 4482 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:10:54 crc kubenswrapper[4482]: E1125 07:10:54.571695 4482 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:10:54 crc kubenswrapper[4482]: E1125 07:10:54.572699 4482 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:10:54 crc kubenswrapper[4482]: E1125 07:10:54.573066 4482 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:10:54 crc kubenswrapper[4482]: E1125 07:10:54.573593 4482 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:10:54 crc kubenswrapper[4482]: I1125 07:10:54.573629 4482 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Nov 25 07:10:54 crc kubenswrapper[4482]: E1125 07:10:54.574597 4482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused" interval="200ms"
Nov 25 07:10:54 crc kubenswrapper[4482]: E1125 07:10:54.776086 4482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused" interval="400ms"
Nov 25 07:10:55 crc kubenswrapper[4482]: E1125 07:10:55.178442 4482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused" interval="800ms"
Nov 25 07:10:55 crc kubenswrapper[4482]: I1125 07:10:55.835711 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:10:55 crc kubenswrapper[4482]: I1125 07:10:55.836242 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:10:55 crc kubenswrapper[4482]: I1125 07:10:55.836418 4482 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:10:55 crc kubenswrapper[4482]: I1125 07:10:55.842233 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Nov 25 07:10:55 crc kubenswrapper[4482]: E1125 07:10:55.980080 4482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused" interval="1.6s"
Nov 25 07:10:56 crc kubenswrapper[4482]: I1125 07:10:56.863190 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="1a79608b-f242-45d3-aa13-73c0d7bfd626" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 25 07:10:57 crc kubenswrapper[4482]: E1125 07:10:57.581325 4482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused" interval="3.2s"
Nov 25 07:10:58 crc kubenswrapper[4482]: E1125 07:10:58.503734 4482 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.26.133:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b2e5bdf68fed1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 07:10:51.170782929 +0000 UTC m=+1425.659014188,LastTimestamp:2025-11-25 07:10:51.170782929 +0000 UTC m=+1425.659014188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 25 07:10:59 crc kubenswrapper[4482]: E1125 07:10:59.927816 4482 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0\": dial tcp 192.168.26.133:6443: connect: connection refused" pod="openstack/ovsdbserver-nb-0" volumeName="ovndbcluster-nb-etc-ovn"
Nov 25 07:11:00 crc kubenswrapper[4482]: E1125 07:11:00.782201 4482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.133:6443: connect: connection refused" interval="6.4s"
Nov 25 07:11:02 crc kubenswrapper[4482]: I1125 07:11:02.248431 4482 generic.go:334] "Generic (PLEG): container finished" podID="61f162c1-bcc6-4098-86f3-7cff5790a2f3" containerID="3fedc62076db9368642d2882fd4055597903be784417326620b567b4d622fa8d" exitCode=1
Nov 25 07:11:02 crc kubenswrapper[4482]: I1125 07:11:02.248514 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" event={"ID":"61f162c1-bcc6-4098-86f3-7cff5790a2f3","Type":"ContainerDied","Data":"3fedc62076db9368642d2882fd4055597903be784417326620b567b4d622fa8d"}
Nov 25 07:11:02 crc kubenswrapper[4482]: I1125 07:11:02.249619 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:02 crc kubenswrapper[4482]: I1125 07:11:02.249920 4482 status_manager.go:851] "Failed to get status for pod" podUID="61f162c1-bcc6-4098-86f3-7cff5790a2f3" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b7b9ccd57-7v896\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:02 crc kubenswrapper[4482]: I1125 07:11:02.250019 4482 scope.go:117] "RemoveContainer" containerID="3fedc62076db9368642d2882fd4055597903be784417326620b567b4d622fa8d"
Nov 25 07:11:02 crc kubenswrapper[4482]: I1125 07:11:02.250214 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:02 crc kubenswrapper[4482]: E1125 07:11:02.877389 4482 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/glance-glance-default-external-api-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/glance-glance-default-external-api-0\": dial tcp 192.168.26.133:6443: connect: connection refused" pod="openstack/glance-default-external-api-0" volumeName="glance"
Nov 25 07:11:03 crc kubenswrapper[4482]: I1125 07:11:03.257319 4482 generic.go:334] "Generic (PLEG): container finished" podID="61f162c1-bcc6-4098-86f3-7cff5790a2f3" containerID="1bcd9e651d21937f0ca3f5692dad19f9e6429a5d0463edc80504b8c3a06f3f99" exitCode=1
Nov 25 07:11:03 crc kubenswrapper[4482]: I1125 07:11:03.257369 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" event={"ID":"61f162c1-bcc6-4098-86f3-7cff5790a2f3","Type":"ContainerDied","Data":"1bcd9e651d21937f0ca3f5692dad19f9e6429a5d0463edc80504b8c3a06f3f99"}
Nov 25 07:11:03 crc kubenswrapper[4482]: I1125 07:11:03.257411 4482 scope.go:117] "RemoveContainer" containerID="3fedc62076db9368642d2882fd4055597903be784417326620b567b4d622fa8d"
Nov 25 07:11:03 crc kubenswrapper[4482]: I1125 07:11:03.257893 4482 scope.go:117] "RemoveContainer" containerID="1bcd9e651d21937f0ca3f5692dad19f9e6429a5d0463edc80504b8c3a06f3f99"
Nov 25 07:11:03 crc kubenswrapper[4482]: E1125 07:11:03.258122 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-6b7b9ccd57-7v896_metallb-system(61f162c1-bcc6-4098-86f3-7cff5790a2f3)\"" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" podUID="61f162c1-bcc6-4098-86f3-7cff5790a2f3"
Nov 25 07:11:03 crc kubenswrapper[4482]: I1125 07:11:03.258229 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:03 crc kubenswrapper[4482]: I1125 07:11:03.259455 4482 status_manager.go:851] "Failed to get status for pod" podUID="61f162c1-bcc6-4098-86f3-7cff5790a2f3" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b7b9ccd57-7v896\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:03 crc kubenswrapper[4482]: I1125 07:11:03.259717 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:04 crc kubenswrapper[4482]: I1125 07:11:04.272820 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Nov 25 07:11:04 crc kubenswrapper[4482]: I1125 07:11:04.272894 4482 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd" exitCode=1
Nov 25 07:11:04 crc kubenswrapper[4482]: I1125 07:11:04.272926 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd"}
Nov 25 07:11:04 crc kubenswrapper[4482]: I1125 07:11:04.273460 4482 scope.go:117] "RemoveContainer" containerID="3d6d0e1cbfc56d8e0b1ef6b4a384461cb9c08686398f1a7cddb69eb8753106bd"
Nov 25 07:11:04 crc kubenswrapper[4482]: I1125 07:11:04.274093 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:04 crc kubenswrapper[4482]: I1125 07:11:04.274534 4482 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:04 crc kubenswrapper[4482]: I1125 07:11:04.274880 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:04 crc kubenswrapper[4482]: I1125 07:11:04.275195 4482 status_manager.go:851] "Failed to get status for pod" podUID="61f162c1-bcc6-4098-86f3-7cff5790a2f3" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b7b9ccd57-7v896\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.290352 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.290817 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"06b19bf0c26a05dfc20e337212fe6ad52d3462dd7fa43e4d425390b1bd2f4ce0"}
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.292284 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.292788 4482 status_manager.go:851] "Failed to get status for pod" podUID="61f162c1-bcc6-4098-86f3-7cff5790a2f3" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b7b9ccd57-7v896\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.294961 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.295248 4482 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.838792 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.838889 4482 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.839723 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.840000 4482 status_manager.go:851] "Failed to get status for pod" podUID="61f162c1-bcc6-4098-86f3-7cff5790a2f3" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b7b9ccd57-7v896\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.840262 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.840915 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.841423 4482 status_manager.go:851] "Failed to get status for pod" podUID="61f162c1-bcc6-4098-86f3-7cff5790a2f3" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b7b9ccd57-7v896\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.841707 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.841927 4482 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.857304 4482 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6df3d28-c8f6-4460-b529-d5d1327f8e90"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.857336 4482 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6df3d28-c8f6-4460-b529-d5d1327f8e90"
Nov 25 07:11:05 crc kubenswrapper[4482]: E1125 07:11:05.857673 4482 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.858093 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.978582 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.982638 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.983267 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.983547 4482 status_manager.go:851] "Failed to get status for pod" podUID="61f162c1-bcc6-4098-86f3-7cff5790a2f3" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b7b9ccd57-7v896\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.983811 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:05 crc kubenswrapper[4482]: I1125 07:11:05.984083 4482 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:06 crc kubenswrapper[4482]: I1125 07:11:06.300323 4482 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4a910080ab84270f6232da3efee161679e15afcf4f6b9890c72a64262006f012" exitCode=0
Nov 25 07:11:06 crc kubenswrapper[4482]: I1125 07:11:06.300476 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4a910080ab84270f6232da3efee161679e15afcf4f6b9890c72a64262006f012"}
Nov 25 07:11:06 crc kubenswrapper[4482]: I1125 07:11:06.300525 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"63d4efeb749f859b460d60274501b03b25c23de3301932a70c8cc8d4bdeaeeb0"}
Nov 25 07:11:06 crc kubenswrapper[4482]: I1125 07:11:06.300621 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 07:11:06 crc kubenswrapper[4482]: I1125 07:11:06.300776 4482 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6df3d28-c8f6-4460-b529-d5d1327f8e90"
Nov 25 07:11:06 crc kubenswrapper[4482]: I1125 07:11:06.300793 4482 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6df3d28-c8f6-4460-b529-d5d1327f8e90"
Nov 25 07:11:06 crc kubenswrapper[4482]: E1125 07:11:06.301255 4482 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.26.133:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 07:11:06 crc kubenswrapper[4482]: I1125 07:11:06.301881 4482 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:06 crc kubenswrapper[4482]: I1125 07:11:06.302248 4482 status_manager.go:851] "Failed to get status for pod" podUID="61f162c1-bcc6-4098-86f3-7cff5790a2f3" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6b7b9ccd57-7v896\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:06 crc kubenswrapper[4482]: I1125 07:11:06.302515 4482 status_manager.go:851] "Failed to get status for pod" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:06 crc kubenswrapper[4482]: I1125 07:11:06.302780 4482 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.26.133:6443: connect: connection refused"
Nov 25 07:11:06 crc kubenswrapper[4482]: I1125 07:11:06.860508 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="1a79608b-f242-45d3-aa13-73c0d7bfd626" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 25 07:11:07 crc kubenswrapper[4482]: I1125 07:11:07.165635 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896"
Nov 25 07:11:07 crc kubenswrapper[4482]: I1125 07:11:07.166444 4482 scope.go:117] "RemoveContainer" containerID="1bcd9e651d21937f0ca3f5692dad19f9e6429a5d0463edc80504b8c3a06f3f99"
Nov 25 07:11:07 crc kubenswrapper[4482]: E1125 07:11:07.166740 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-6b7b9ccd57-7v896_metallb-system(61f162c1-bcc6-4098-86f3-7cff5790a2f3)\"" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" podUID="61f162c1-bcc6-4098-86f3-7cff5790a2f3"
Nov 25 07:11:07 crc kubenswrapper[4482]: I1125 07:11:07.322949 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"54223e0956b6c6e57162d44947ff7eb81d11a1cc51d6963f095961678767e08e"}
Nov 25 07:11:07 crc kubenswrapper[4482]: I1125 07:11:07.322988 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cd4d7b789a22b846c45a29fd5f6a4c1224512c3c9ac7e8430eb20a70cbed60ec"}
Nov 25 07:11:07 crc kubenswrapper[4482]: I1125 07:11:07.322999 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f9178a5418cc678a61e480e1e0cd9214ff6f26fce5dc9db7868145ec77e7e5c3"}
Nov 25 07:11:07 crc kubenswrapper[4482]: I1125 07:11:07.323007 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"137443fc8d5d6f310cf74a3ce9dbea493f519cbe697c003a96983ead800997d2"}
Nov 25 07:11:08 crc kubenswrapper[4482]: I1125 07:11:08.335122 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"40c10e33947fb085d35a7b3864a082016df8277a6f028ebdcfc0333c42fd7049"}
Nov 25 07:11:08 crc kubenswrapper[4482]: I1125 07:11:08.335545 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 07:11:08 crc kubenswrapper[4482]: I1125 07:11:08.335397 4482 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6df3d28-c8f6-4460-b529-d5d1327f8e90"
Nov 25 07:11:08 crc kubenswrapper[4482]: I1125 07:11:08.335580 4482 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6df3d28-c8f6-4460-b529-d5d1327f8e90"
Nov 25 07:11:10 crc kubenswrapper[4482]: I1125 07:11:10.858771 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 07:11:10 crc kubenswrapper[4482]: I1125 07:11:10.859496 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 07:11:10 crc kubenswrapper[4482]: I1125 07:11:10.864358 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.383712 4482 generic.go:334] "Generic (PLEG): container finished" podID="a824a0e7-eb0a-4a5c-aafd-d01b622d6141" containerID="4e101f6c433b2c16cddd678061e9c1a85ce78919aa1a41e94e14d0a0bb311358" exitCode=1
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.383812 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt" event={"ID":"a824a0e7-eb0a-4a5c-aafd-d01b622d6141","Type":"ContainerDied","Data":"4e101f6c433b2c16cddd678061e9c1a85ce78919aa1a41e94e14d0a0bb311358"}
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.384814 4482 scope.go:117] "RemoveContainer" containerID="4e101f6c433b2c16cddd678061e9c1a85ce78919aa1a41e94e14d0a0bb311358"
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.391054 4482 generic.go:334] "Generic (PLEG): container finished" podID="9dbafcad-7706-4390-9745-238418d06f5c" containerID="0a5c174cc595bb4e16b69c8475a26cb7391b67e66693437f57fe83f6bfedb8bc" exitCode=1
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.391121 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" event={"ID":"9dbafcad-7706-4390-9745-238418d06f5c","Type":"ContainerDied","Data":"0a5c174cc595bb4e16b69c8475a26cb7391b67e66693437f57fe83f6bfedb8bc"}
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.391528 4482 scope.go:117] "RemoveContainer" containerID="0a5c174cc595bb4e16b69c8475a26cb7391b67e66693437f57fe83f6bfedb8bc"
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.395120 4482 generic.go:334] "Generic (PLEG): container finished" podID="4d7476c3-dd4a-4e22-a018-e9a93d53ece5" containerID="d50aa3ab08ace20a4bf09a1674bed2a916200e3f3205e7a51885e230e64010bd" exitCode=1
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.395187 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" event={"ID":"4d7476c3-dd4a-4e22-a018-e9a93d53ece5","Type":"ContainerDied","Data":"d50aa3ab08ace20a4bf09a1674bed2a916200e3f3205e7a51885e230e64010bd"}
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.395482 4482 scope.go:117] "RemoveContainer" containerID="d50aa3ab08ace20a4bf09a1674bed2a916200e3f3205e7a51885e230e64010bd"
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.402813 4482 generic.go:334] "Generic (PLEG): container finished" podID="4754fff5-c20f-42c5-8c10-bb9975919bf3" containerID="4b14250b497648f6feceb8b6e551b8c260869eec987ab73b40aa939fa27d792a" exitCode=1
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.402877 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" event={"ID":"4754fff5-c20f-42c5-8c10-bb9975919bf3","Type":"ContainerDied","Data":"4b14250b497648f6feceb8b6e551b8c260869eec987ab73b40aa939fa27d792a"}
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.403446 4482 scope.go:117] "RemoveContainer" containerID="4b14250b497648f6feceb8b6e551b8c260869eec987ab73b40aa939fa27d792a"
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.413056 4482 generic.go:334] "Generic (PLEG): container finished" podID="2375b89e-398f-45d4-badc-1980cfcda4a1" containerID="1983925be4c5b314e70dfc5f4f37025f1a92be80e343a14b542879b9e83f4201" exitCode=1
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.413094 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" event={"ID":"2375b89e-398f-45d4-badc-1980cfcda4a1","Type":"ContainerDied","Data":"1983925be4c5b314e70dfc5f4f37025f1a92be80e343a14b542879b9e83f4201"}
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.413565 4482 scope.go:117] "RemoveContainer" containerID="1983925be4c5b314e70dfc5f4f37025f1a92be80e343a14b542879b9e83f4201"
Nov 25 07:11:12 crc kubenswrapper[4482]: I1125 07:11:12.718986 4482 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" podUID="ee690930-78a0-4f7d-be10-feee0cf523d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.354184 4482 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.420513 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt" event={"ID":"a824a0e7-eb0a-4a5c-aafd-d01b622d6141","Type":"ContainerStarted","Data":"c743f86dc02cd8e60535a971be27693f7ea0befbeae1def2f2d5a983b140e197"}
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.420677 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.422589 4482 generic.go:334] "Generic (PLEG): container finished" podID="4754fff5-c20f-42c5-8c10-bb9975919bf3" containerID="85e8e95dd9824134268f11ce764c890d570df220b80cf2a24bf08412db33ec3c" exitCode=1
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.422627 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" event={"ID":"4754fff5-c20f-42c5-8c10-bb9975919bf3","Type":"ContainerDied","Data":"85e8e95dd9824134268f11ce764c890d570df220b80cf2a24bf08412db33ec3c"}
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.422833 4482 scope.go:117] "RemoveContainer" containerID="4b14250b497648f6feceb8b6e551b8c260869eec987ab73b40aa939fa27d792a"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.423447 4482 scope.go:117] "RemoveContainer" containerID="85e8e95dd9824134268f11ce764c890d570df220b80cf2a24bf08412db33ec3c"
Nov 25 07:11:13 crc kubenswrapper[4482]: E1125 07:11:13.423788 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-svglr_openstack-operators(4754fff5-c20f-42c5-8c10-bb9975919bf3)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" podUID="4754fff5-c20f-42c5-8c10-bb9975919bf3"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.424845 4482 generic.go:334] "Generic (PLEG): container finished" podID="3a5cd60b-13ff-44ea-b256-1e05d03912e4" containerID="c82a7988d2b2abdd3088d986e7adc8af613611ca65413ba16ae1870e69c10f8d" exitCode=1
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.424900 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" event={"ID":"3a5cd60b-13ff-44ea-b256-1e05d03912e4","Type":"ContainerDied","Data":"c82a7988d2b2abdd3088d986e7adc8af613611ca65413ba16ae1870e69c10f8d"}
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.425450 4482 scope.go:117] "RemoveContainer" containerID="c82a7988d2b2abdd3088d986e7adc8af613611ca65413ba16ae1870e69c10f8d"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.437899 4482 generic.go:334] "Generic (PLEG): container finished" podID="ee690930-78a0-4f7d-be10-feee0cf523d7" containerID="87fd78bd424794592976dfa77489b3a1fffcd70265b55d2eaa5f2a5ce9cc5952" exitCode=1
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.437974 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" event={"ID":"ee690930-78a0-4f7d-be10-feee0cf523d7","Type":"ContainerDied","Data":"87fd78bd424794592976dfa77489b3a1fffcd70265b55d2eaa5f2a5ce9cc5952"}
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.438845 4482 scope.go:117] "RemoveContainer" containerID="87fd78bd424794592976dfa77489b3a1fffcd70265b55d2eaa5f2a5ce9cc5952"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.441967 4482 generic.go:334] "Generic (PLEG): container finished" podID="a2dcdd81-a863-4453-b1b6-e1824d5444b6" containerID="26d0a2435aa24cf9bc994091d4da1326235c9369561b8fcccb3fac21087db6e7" exitCode=1
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.442009 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" event={"ID":"a2dcdd81-a863-4453-b1b6-e1824d5444b6","Type":"ContainerDied","Data":"26d0a2435aa24cf9bc994091d4da1326235c9369561b8fcccb3fac21087db6e7"}
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.442333 4482 scope.go:117] "RemoveContainer" containerID="26d0a2435aa24cf9bc994091d4da1326235c9369561b8fcccb3fac21087db6e7"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.448013 4482 generic.go:334] "Generic (PLEG): container finished" podID="4ab40028-48ce-48f7-bbd4-97b1bed0cf4c" containerID="cbe48659e3b993bbd97a48d9f917a0cc45a4edf7f304b49298bda2beafe4bd9a" exitCode=1
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.448067 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" event={"ID":"4ab40028-48ce-48f7-bbd4-97b1bed0cf4c","Type":"ContainerDied","Data":"cbe48659e3b993bbd97a48d9f917a0cc45a4edf7f304b49298bda2beafe4bd9a"}
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.448398 4482 scope.go:117] "RemoveContainer" containerID="cbe48659e3b993bbd97a48d9f917a0cc45a4edf7f304b49298bda2beafe4bd9a"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.453465 4482 generic.go:334] "Generic (PLEG): container finished" podID="2375b89e-398f-45d4-badc-1980cfcda4a1" containerID="1579a4ba95fe0a43f4aab511be88eea30393621c1e2db18af4bf76abe2e434a9" exitCode=1
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.453515 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" event={"ID":"2375b89e-398f-45d4-badc-1980cfcda4a1","Type":"ContainerDied","Data":"1579a4ba95fe0a43f4aab511be88eea30393621c1e2db18af4bf76abe2e434a9"}
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.453809 4482 scope.go:117] "RemoveContainer" containerID="1579a4ba95fe0a43f4aab511be88eea30393621c1e2db18af4bf76abe2e434a9"
Nov 25 07:11:13 crc kubenswrapper[4482]: E1125 07:11:13.454005 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-2qkzx_openstack-operators(2375b89e-398f-45d4-badc-1980cfcda4a1)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" podUID="2375b89e-398f-45d4-badc-1980cfcda4a1"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.455365 4482 generic.go:334] "Generic (PLEG): container finished" podID="337411b1-ff37-4370-ad36-415f816f5d07" containerID="d873bf5b8b26c1921674543f60f034820f1f6dd2f3e05fca296c94fabb08ac1f" exitCode=1
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.455406 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" event={"ID":"337411b1-ff37-4370-ad36-415f816f5d07","Type":"ContainerDied","Data":"d873bf5b8b26c1921674543f60f034820f1f6dd2f3e05fca296c94fabb08ac1f"}
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.455641 4482 scope.go:117] "RemoveContainer" containerID="d873bf5b8b26c1921674543f60f034820f1f6dd2f3e05fca296c94fabb08ac1f"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.462662 4482 generic.go:334] "Generic (PLEG): container finished" podID="9dbafcad-7706-4390-9745-238418d06f5c" containerID="011637fc673149702bba91b2f72de5945df2d05318e9d6623d3edb14afe9c363" exitCode=1
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.462709 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" event={"ID":"9dbafcad-7706-4390-9745-238418d06f5c","Type":"ContainerDied","Data":"011637fc673149702bba91b2f72de5945df2d05318e9d6623d3edb14afe9c363"}
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.463025 4482 scope.go:117] "RemoveContainer" containerID="011637fc673149702bba91b2f72de5945df2d05318e9d6623d3edb14afe9c363"
Nov 25 07:11:13 crc kubenswrapper[4482]: E1125 07:11:13.463230 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-m5rfx_openstack-operators(9dbafcad-7706-4390-9745-238418d06f5c)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" podUID="9dbafcad-7706-4390-9745-238418d06f5c"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.502662 4482 generic.go:334] "Generic (PLEG): container finished" podID="4d7476c3-dd4a-4e22-a018-e9a93d53ece5" containerID="7270281d10e76dffc5e940fcc49cbc2e1cbe302b3d771fee1756f9e131c565f7" exitCode=1
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.502724 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" event={"ID":"4d7476c3-dd4a-4e22-a018-e9a93d53ece5","Type":"ContainerDied","Data":"7270281d10e76dffc5e940fcc49cbc2e1cbe302b3d771fee1756f9e131c565f7"}
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.503257 4482 scope.go:117] "RemoveContainer" containerID="7270281d10e76dffc5e940fcc49cbc2e1cbe302b3d771fee1756f9e131c565f7"
Nov 25 07:11:13 crc kubenswrapper[4482]: E1125 07:11:13.503477 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-jq46h_openstack-operators(4d7476c3-dd4a-4e22-a018-e9a93d53ece5)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" podUID="4d7476c3-dd4a-4e22-a018-e9a93d53ece5"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.515393 4482 generic.go:334] "Generic (PLEG): container finished" podID="3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a" containerID="b29a95a87a1294605d75d120af662060c297dc5140134bbe49d1c0429f58aad1" exitCode=1
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.515446 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" event={"ID":"3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a","Type":"ContainerDied","Data":"b29a95a87a1294605d75d120af662060c297dc5140134bbe49d1c0429f58aad1"}
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.515991 4482 scope.go:117] "RemoveContainer" containerID="b29a95a87a1294605d75d120af662060c297dc5140134bbe49d1c0429f58aad1"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.519214 4482 generic.go:334] "Generic (PLEG): container finished" podID="1af05cb8-e059-49d7-91dc-17bfecaec8db" containerID="ecf3504ce636e98d632396fe10440e17216ee3f25454bdc21c806f6df0584169" exitCode=1
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.519468 4482 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6df3d28-c8f6-4460-b529-d5d1327f8e90"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.519486 4482 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6df3d28-c8f6-4460-b529-d5d1327f8e90"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.519598 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" event={"ID":"1af05cb8-e059-49d7-91dc-17bfecaec8db","Type":"ContainerDied","Data":"ecf3504ce636e98d632396fe10440e17216ee3f25454bdc21c806f6df0584169"}
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.519879 4482 scope.go:117] "RemoveContainer" containerID="ecf3504ce636e98d632396fe10440e17216ee3f25454bdc21c806f6df0584169"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.524586 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.569235 4482 scope.go:117] "RemoveContainer" containerID="1983925be4c5b314e70dfc5f4f37025f1a92be80e343a14b542879b9e83f4201"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.697848 4482 scope.go:117] "RemoveContainer" containerID="0a5c174cc595bb4e16b69c8475a26cb7391b67e66693437f57fe83f6bfedb8bc"
Nov 25 07:11:13 crc kubenswrapper[4482]: I1125 07:11:13.764933 4482 scope.go:117] "RemoveContainer" containerID="d50aa3ab08ace20a4bf09a1674bed2a916200e3f3205e7a51885e230e64010bd"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.528949 4482 generic.go:334] "Generic (PLEG): container finished" podID="3a5cd60b-13ff-44ea-b256-1e05d03912e4" containerID="f950c3b0a856af675ea32b1c79408d9b568dc2750931dbace2af882edf07ac76" exitCode=1
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.529148 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" event={"ID":"3a5cd60b-13ff-44ea-b256-1e05d03912e4","Type":"ContainerDied","Data":"f950c3b0a856af675ea32b1c79408d9b568dc2750931dbace2af882edf07ac76"}
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.529366 4482 scope.go:117] "RemoveContainer" containerID="c82a7988d2b2abdd3088d986e7adc8af613611ca65413ba16ae1870e69c10f8d"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.530376 4482 scope.go:117] "RemoveContainer" containerID="f950c3b0a856af675ea32b1c79408d9b568dc2750931dbace2af882edf07ac76"
Nov 25 07:11:14 crc kubenswrapper[4482]: E1125 07:11:14.530827 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-lx6v6_openstack-operators(3a5cd60b-13ff-44ea-b256-1e05d03912e4)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" podUID="3a5cd60b-13ff-44ea-b256-1e05d03912e4"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.531829 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" event={"ID":"ee690930-78a0-4f7d-be10-feee0cf523d7","Type":"ContainerStarted","Data":"4a4904f21f6d9ed0d21d31898d79a8dc94fb021baf8febe8b6d8cf88bed601f5"}
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.532211 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.536603 4482 generic.go:334] "Generic (PLEG): container finished" podID="1af05cb8-e059-49d7-91dc-17bfecaec8db" containerID="4a7f35af51469b0d9fc1fa263adda034c3cc72f7f48f1b03956121c0e4c1c809" exitCode=1
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.536656 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" event={"ID":"1af05cb8-e059-49d7-91dc-17bfecaec8db","Type":"ContainerDied","Data":"4a7f35af51469b0d9fc1fa263adda034c3cc72f7f48f1b03956121c0e4c1c809"}
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.537250 4482 scope.go:117] "RemoveContainer" containerID="4a7f35af51469b0d9fc1fa263adda034c3cc72f7f48f1b03956121c0e4c1c809"
Nov 25 07:11:14 crc kubenswrapper[4482]: E1125 07:11:14.537501 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-2cfdk_openstack-operators(1af05cb8-e059-49d7-91dc-17bfecaec8db)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" podUID="1af05cb8-e059-49d7-91dc-17bfecaec8db"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.537990 4482 generic.go:334] "Generic (PLEG): container finished" podID="4012508a-01a7-4e14-812e-7c70b350662a" containerID="4af88cb6b77bd9336f070a73a721621cb2ff8640147717b3b09bbfa9438605a4" exitCode=1
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.538064 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" event={"ID":"4012508a-01a7-4e14-812e-7c70b350662a","Type":"ContainerDied","Data":"4af88cb6b77bd9336f070a73a721621cb2ff8640147717b3b09bbfa9438605a4"}
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.538349 4482 scope.go:117] "RemoveContainer" containerID="4af88cb6b77bd9336f070a73a721621cb2ff8640147717b3b09bbfa9438605a4"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.541425 4482 generic.go:334] "Generic (PLEG): container finished" podID="d0b2883e-6d53-465c-ba0c-45173ff59d4b" containerID="b0630f0752ed5e54a931a90a68391ae360781290937b624a85bc8c424cb1a609" exitCode=1
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.541473 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" event={"ID":"d0b2883e-6d53-465c-ba0c-45173ff59d4b","Type":"ContainerDied","Data":"b0630f0752ed5e54a931a90a68391ae360781290937b624a85bc8c424cb1a609"}
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.542100 4482 scope.go:117] "RemoveContainer" containerID="b0630f0752ed5e54a931a90a68391ae360781290937b624a85bc8c424cb1a609"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.545450 4482 generic.go:334] "Generic (PLEG): container finished" podID="4ab40028-48ce-48f7-bbd4-97b1bed0cf4c" containerID="8df168e715b6281d9f995e3dac1568ffb63a30b76eaa7abe714c8d40389ec639" exitCode=1
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.545505 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" event={"ID":"4ab40028-48ce-48f7-bbd4-97b1bed0cf4c","Type":"ContainerDied","Data":"8df168e715b6281d9f995e3dac1568ffb63a30b76eaa7abe714c8d40389ec639"}
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.545970 4482 scope.go:117] "RemoveContainer" containerID="8df168e715b6281d9f995e3dac1568ffb63a30b76eaa7abe714c8d40389ec639"
Nov 25 07:11:14 crc kubenswrapper[4482]: E1125 07:11:14.546185 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-m7kcf_openstack-operators(4ab40028-48ce-48f7-bbd4-97b1bed0cf4c)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" podUID="4ab40028-48ce-48f7-bbd4-97b1bed0cf4c"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.546748 4482 generic.go:334] "Generic (PLEG): container finished" podID="004e08bd-55ee-4702-88b6-69bd67a32610" containerID="929c213764e37eb1414b74117ecfbebc19322c88247ec1bd95b57fc3cc5ebe94" exitCode=1
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.546801 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" event={"ID":"004e08bd-55ee-4702-88b6-69bd67a32610","Type":"ContainerDied","Data":"929c213764e37eb1414b74117ecfbebc19322c88247ec1bd95b57fc3cc5ebe94"}
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.547089 4482 scope.go:117] "RemoveContainer" containerID="929c213764e37eb1414b74117ecfbebc19322c88247ec1bd95b57fc3cc5ebe94"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.548717 4482 generic.go:334] "Generic (PLEG): container finished" podID="20c9d02f-1cbc-4c66-84ff-7cbf40bac507" containerID="64d463c2b03fccbe1f4b60e451e6c389fa29df6ca6188bfcf716aec91055ee23" exitCode=1
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.548760 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" event={"ID":"20c9d02f-1cbc-4c66-84ff-7cbf40bac507","Type":"ContainerDied","Data":"64d463c2b03fccbe1f4b60e451e6c389fa29df6ca6188bfcf716aec91055ee23"}
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.549160 4482 scope.go:117] "RemoveContainer" containerID="64d463c2b03fccbe1f4b60e451e6c389fa29df6ca6188bfcf716aec91055ee23"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.550689 4482 generic.go:334] "Generic (PLEG): container finished" podID="3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a" containerID="6dfbffe49d3a24d9953151df7822d569293d8b4439f6467cf59164ec26460279" exitCode=1
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.550731 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" event={"ID":"3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a","Type":"ContainerDied","Data":"6dfbffe49d3a24d9953151df7822d569293d8b4439f6467cf59164ec26460279"}
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.551008 4482 scope.go:117] "RemoveContainer" containerID="6dfbffe49d3a24d9953151df7822d569293d8b4439f6467cf59164ec26460279"
Nov 25 07:11:14 crc kubenswrapper[4482]: E1125 07:11:14.551214 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-k8drr_openstack-operators(3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" podUID="3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.557020 4482 generic.go:334] "Generic (PLEG): container finished" podID="a2dcdd81-a863-4453-b1b6-e1824d5444b6" containerID="4aed535fa2d5ffa019e1b430155d9e22df28556b81459a1bd72596ea0f9e8e4d" exitCode=1
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.557047 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" event={"ID":"a2dcdd81-a863-4453-b1b6-e1824d5444b6","Type":"ContainerDied","Data":"4aed535fa2d5ffa019e1b430155d9e22df28556b81459a1bd72596ea0f9e8e4d"}
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.557688 4482 scope.go:117] "RemoveContainer" containerID="4aed535fa2d5ffa019e1b430155d9e22df28556b81459a1bd72596ea0f9e8e4d"
Nov 25 07:11:14 crc kubenswrapper[4482]: E1125 07:11:14.557878 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-t4dwf_openstack-operators(a2dcdd81-a863-4453-b1b6-e1824d5444b6)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" podUID="a2dcdd81-a863-4453-b1b6-e1824d5444b6"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.566830 4482 scope.go:117] "RemoveContainer" containerID="ecf3504ce636e98d632396fe10440e17216ee3f25454bdc21c806f6df0584169"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.610860 4482 generic.go:334] "Generic (PLEG): container finished" podID="4a4c6e25-e4fb-49b7-b757-e82e153fdb24" containerID="763e7585469d4ac62b9482478155c10876dc0d5a7f06d910aa018a0b2b63bd3a" exitCode=1
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.610922 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" event={"ID":"4a4c6e25-e4fb-49b7-b757-e82e153fdb24","Type":"ContainerDied","Data":"763e7585469d4ac62b9482478155c10876dc0d5a7f06d910aa018a0b2b63bd3a"}
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.611399 4482 scope.go:117] "RemoveContainer" containerID="763e7585469d4ac62b9482478155c10876dc0d5a7f06d910aa018a0b2b63bd3a"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.616991 4482 generic.go:334] "Generic (PLEG): container finished" podID="337411b1-ff37-4370-ad36-415f816f5d07" containerID="5f13a595a8fde1fca1f51a078a64b6b6ddb8bfde20e101320ce9b3285132e575" exitCode=1
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.617066 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" event={"ID":"337411b1-ff37-4370-ad36-415f816f5d07","Type":"ContainerDied","Data":"5f13a595a8fde1fca1f51a078a64b6b6ddb8bfde20e101320ce9b3285132e575"}
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.617377 4482 scope.go:117] "RemoveContainer" containerID="5f13a595a8fde1fca1f51a078a64b6b6ddb8bfde20e101320ce9b3285132e575"
Nov 25 07:11:14 crc kubenswrapper[4482]: E1125 07:11:14.617548 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-4mr9n_openstack-operators(337411b1-ff37-4370-ad36-415f816f5d07)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" podUID="337411b1-ff37-4370-ad36-415f816f5d07"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.617880 4482 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6df3d28-c8f6-4460-b529-d5d1327f8e90"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.617898 4482 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6df3d28-c8f6-4460-b529-d5d1327f8e90"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.767766 4482 scope.go:117] "RemoveContainer" containerID="cbe48659e3b993bbd97a48d9f917a0cc45a4edf7f304b49298bda2beafe4bd9a"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.809341 4482 scope.go:117] "RemoveContainer" containerID="b29a95a87a1294605d75d120af662060c297dc5140134bbe49d1c0429f58aad1"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.839827 4482 scope.go:117] "RemoveContainer" containerID="26d0a2435aa24cf9bc994091d4da1326235c9369561b8fcccb3fac21087db6e7"
Nov 25 07:11:14 crc kubenswrapper[4482]: I1125 07:11:14.886553 4482 scope.go:117] "RemoveContainer" containerID="d873bf5b8b26c1921674543f60f034820f1f6dd2f3e05fca296c94fabb08ac1f"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.633865 4482 generic.go:334] "Generic (PLEG): container finished" podID="4be124a3-1fa2-455c-834f-01e66fc326b3" containerID="f162cf30d632ace23a8c2ddb8a5c8df06ab3e974b8684d75d48f6ff873b63bf2" exitCode=1
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.633968 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" event={"ID":"4be124a3-1fa2-455c-834f-01e66fc326b3","Type":"ContainerDied","Data":"f162cf30d632ace23a8c2ddb8a5c8df06ab3e974b8684d75d48f6ff873b63bf2"}
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.634743 4482 scope.go:117] "RemoveContainer" containerID="f162cf30d632ace23a8c2ddb8a5c8df06ab3e974b8684d75d48f6ff873b63bf2"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.638801 4482 generic.go:334] "Generic (PLEG): container finished" podID="004e08bd-55ee-4702-88b6-69bd67a32610" containerID="91a7b825b34d9ff52731f71622c12b5ee57409d2d673c64c618786df6baecf54" exitCode=1
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.638858 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" event={"ID":"004e08bd-55ee-4702-88b6-69bd67a32610","Type":"ContainerDied","Data":"91a7b825b34d9ff52731f71622c12b5ee57409d2d673c64c618786df6baecf54"}
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.638897 4482 scope.go:117] "RemoveContainer" containerID="929c213764e37eb1414b74117ecfbebc19322c88247ec1bd95b57fc3cc5ebe94"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.639262 4482 scope.go:117] "RemoveContainer" containerID="91a7b825b34d9ff52731f71622c12b5ee57409d2d673c64c618786df6baecf54"
Nov 25 07:11:15 crc kubenswrapper[4482]: E1125 07:11:15.639472 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-7cd5954d9-kmdnq_openstack-operators(004e08bd-55ee-4702-88b6-69bd67a32610)\"" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" podUID="004e08bd-55ee-4702-88b6-69bd67a32610"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.643696 4482 generic.go:334] "Generic (PLEG): container finished" podID="4a627cd2-d42b-4958-a41c-230dd8246061" containerID="1dc9279bf9e79ba53fdb68995812068144fddfb6804664ee56c37679d5889bc4" exitCode=1
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.643780 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" event={"ID":"4a627cd2-d42b-4958-a41c-230dd8246061","Type":"ContainerDied","Data":"1dc9279bf9e79ba53fdb68995812068144fddfb6804664ee56c37679d5889bc4"}
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.644268 4482 scope.go:117] "RemoveContainer" containerID="1dc9279bf9e79ba53fdb68995812068144fddfb6804664ee56c37679d5889bc4"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.645955 4482 generic.go:334] "Generic (PLEG): container finished" podID="6ad00506-e452-4f9e-91d3-24b4da4a7104" containerID="b73b93dbf5efb76f2a5e9d0ad1289405716e666053d780a636ee9e55dd2ad5d6" exitCode=1
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.646002 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" event={"ID":"6ad00506-e452-4f9e-91d3-24b4da4a7104","Type":"ContainerDied","Data":"b73b93dbf5efb76f2a5e9d0ad1289405716e666053d780a636ee9e55dd2ad5d6"}
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.646318 4482 scope.go:117] "RemoveContainer" containerID="b73b93dbf5efb76f2a5e9d0ad1289405716e666053d780a636ee9e55dd2ad5d6"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.649157 4482 generic.go:334] "Generic (PLEG): container finished" podID="7059a6d7-9dca-499a-9110-e8dafb53935b" containerID="96776e53576c5690dfe87d539bcd9673af78b98ad30ee2b3622919d65dd241d8" exitCode=1
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.649223 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8" event={"ID":"7059a6d7-9dca-499a-9110-e8dafb53935b","Type":"ContainerDied","Data":"96776e53576c5690dfe87d539bcd9673af78b98ad30ee2b3622919d65dd241d8"}
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.649503 4482 scope.go:117] "RemoveContainer" containerID="96776e53576c5690dfe87d539bcd9673af78b98ad30ee2b3622919d65dd241d8"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.662227 4482 generic.go:334] "Generic (PLEG): container finished" podID="f3eb6724-3ab3-4027-b8e6-3d90c403f13a" containerID="e613add272e2b07f21b51e7bfb49ea451dad3418f2af376b2e293c77c216eec9" exitCode=1
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.662277 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" event={"ID":"f3eb6724-3ab3-4027-b8e6-3d90c403f13a","Type":"ContainerDied","Data":"e613add272e2b07f21b51e7bfb49ea451dad3418f2af376b2e293c77c216eec9"}
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.662623 4482 scope.go:117] "RemoveContainer" containerID="e613add272e2b07f21b51e7bfb49ea451dad3418f2af376b2e293c77c216eec9"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.684926 4482 generic.go:334] "Generic (PLEG): container finished" podID="4a4c6e25-e4fb-49b7-b757-e82e153fdb24" containerID="78eae792262bd006b19f6db44d464dd88ae675bd36dd142e227d8a2b3f5c2088" exitCode=1
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.684987 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" event={"ID":"4a4c6e25-e4fb-49b7-b757-e82e153fdb24","Type":"ContainerDied","Data":"78eae792262bd006b19f6db44d464dd88ae675bd36dd142e227d8a2b3f5c2088"}
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.685383 4482 scope.go:117] "RemoveContainer" containerID="78eae792262bd006b19f6db44d464dd88ae675bd36dd142e227d8a2b3f5c2088"
Nov 25 07:11:15 crc kubenswrapper[4482]: E1125 07:11:15.685603 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-8ttss_openstack-operators(4a4c6e25-e4fb-49b7-b757-e82e153fdb24)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" podUID="4a4c6e25-e4fb-49b7-b757-e82e153fdb24"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.690518 4482 generic.go:334] "Generic (PLEG): container finished" podID="4012508a-01a7-4e14-812e-7c70b350662a" containerID="dea64f64df8c27f6f18ca39acd09b059cce5960e38dff65139934714bce1a004" exitCode=1
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.690561 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" event={"ID":"4012508a-01a7-4e14-812e-7c70b350662a","Type":"ContainerDied","Data":"dea64f64df8c27f6f18ca39acd09b059cce5960e38dff65139934714bce1a004"}
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.690833 4482 scope.go:117] "RemoveContainer" containerID="dea64f64df8c27f6f18ca39acd09b059cce5960e38dff65139934714bce1a004"
Nov 25 07:11:15 crc kubenswrapper[4482]: E1125 07:11:15.693369 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-pv5cc_openstack-operators(4012508a-01a7-4e14-812e-7c70b350662a)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" podUID="4012508a-01a7-4e14-812e-7c70b350662a"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.704510 4482 generic.go:334] "Generic (PLEG): container finished" podID="3ec6220d-a590-404d-a427-98b94a3910c8" containerID="91b6c9a970b394c7002c806e5d03f8310112460dd8aaa192b5f661ac0e531499" exitCode=1
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.704619 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" event={"ID":"3ec6220d-a590-404d-a427-98b94a3910c8","Type":"ContainerDied","Data":"91b6c9a970b394c7002c806e5d03f8310112460dd8aaa192b5f661ac0e531499"}
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.707696 4482 scope.go:117] "RemoveContainer" containerID="91b6c9a970b394c7002c806e5d03f8310112460dd8aaa192b5f661ac0e531499"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.721462 4482 generic.go:334] "Generic (PLEG): container finished" podID="d0b2883e-6d53-465c-ba0c-45173ff59d4b" containerID="4af07a23b623531b729d59ed36f26a3e18a0afe0cb88163f227964398415d548" exitCode=1
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.721516 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" event={"ID":"d0b2883e-6d53-465c-ba0c-45173ff59d4b","Type":"ContainerDied","Data":"4af07a23b623531b729d59ed36f26a3e18a0afe0cb88163f227964398415d548"}
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.722012 4482 scope.go:117] "RemoveContainer" containerID="4af07a23b623531b729d59ed36f26a3e18a0afe0cb88163f227964398415d548"
Nov 25 07:11:15 crc kubenswrapper[4482]: E1125 07:11:15.722397 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-tzkbq_openstack-operators(d0b2883e-6d53-465c-ba0c-45173ff59d4b)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" podUID="d0b2883e-6d53-465c-ba0c-45173ff59d4b"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.725458 4482 generic.go:334] "Generic (PLEG): container finished" podID="20c9d02f-1cbc-4c66-84ff-7cbf40bac507" containerID="0c25c997d297debae124c39c6aa0dfd5090e23447e68f6f16d9eb522386acffb" exitCode=1
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.725502 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" event={"ID":"20c9d02f-1cbc-4c66-84ff-7cbf40bac507","Type":"ContainerDied","Data":"0c25c997d297debae124c39c6aa0dfd5090e23447e68f6f16d9eb522386acffb"}
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.725794 4482 scope.go:117] "RemoveContainer" containerID="0c25c997d297debae124c39c6aa0dfd5090e23447e68f6f16d9eb522386acffb"
Nov 25 07:11:15 crc kubenswrapper[4482]: E1125 07:11:15.726012 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-r6cc4_openstack-operators(20c9d02f-1cbc-4c66-84ff-7cbf40bac507)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" podUID="20c9d02f-1cbc-4c66-84ff-7cbf40bac507"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.733058 4482 generic.go:334] "Generic (PLEG): container finished" podID="42e69f15-3b24-4d83-840e-3633c1bb87a3" containerID="51df49ea9df8ffe25ee067829772be2a565f083b8a71ffcdee985d7f6216e156" exitCode=1
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.733949 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" event={"ID":"42e69f15-3b24-4d83-840e-3633c1bb87a3","Type":"ContainerDied","Data":"51df49ea9df8ffe25ee067829772be2a565f083b8a71ffcdee985d7f6216e156"}
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.734261 4482 scope.go:117] "RemoveContainer" containerID="51df49ea9df8ffe25ee067829772be2a565f083b8a71ffcdee985d7f6216e156"
Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.820526 4482 scope.go:117] "RemoveContainer"
containerID="763e7585469d4ac62b9482478155c10876dc0d5a7f06d910aa018a0b2b63bd3a" Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.877210 4482 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="16be9316-f3b6-4f81-853e-46b47f502a1f" Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.890612 4482 scope.go:117] "RemoveContainer" containerID="4af88cb6b77bd9336f070a73a721621cb2ff8640147717b3b09bbfa9438605a4" Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.927003 4482 scope.go:117] "RemoveContainer" containerID="b0630f0752ed5e54a931a90a68391ae360781290937b624a85bc8c424cb1a609" Nov 25 07:11:15 crc kubenswrapper[4482]: I1125 07:11:15.962336 4482 scope.go:117] "RemoveContainer" containerID="64d463c2b03fccbe1f4b60e451e6c389fa29df6ca6188bfcf716aec91055ee23" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.744368 4482 generic.go:334] "Generic (PLEG): container finished" podID="42e69f15-3b24-4d83-840e-3633c1bb87a3" containerID="a2b4406fa2533f16687a31f8e25453646f560f66faf32691e66b91eea2863d4a" exitCode=1 Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.744475 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" event={"ID":"42e69f15-3b24-4d83-840e-3633c1bb87a3","Type":"ContainerDied","Data":"a2b4406fa2533f16687a31f8e25453646f560f66faf32691e66b91eea2863d4a"} Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.744747 4482 scope.go:117] "RemoveContainer" containerID="51df49ea9df8ffe25ee067829772be2a565f083b8a71ffcdee985d7f6216e156" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.745806 4482 scope.go:117] "RemoveContainer" containerID="a2b4406fa2533f16687a31f8e25453646f560f66faf32691e66b91eea2863d4a" Nov 25 07:11:16 crc kubenswrapper[4482]: E1125 07:11:16.746244 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-xtvvg_openstack-operators(42e69f15-3b24-4d83-840e-3633c1bb87a3)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" podUID="42e69f15-3b24-4d83-840e-3633c1bb87a3" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.750332 4482 generic.go:334] "Generic (PLEG): container finished" podID="4be124a3-1fa2-455c-834f-01e66fc326b3" containerID="0fd8bab38f28284175690b986e5ea11137f63049e9e0a611702f43dd5535a79a" exitCode=1 Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.750400 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" event={"ID":"4be124a3-1fa2-455c-834f-01e66fc326b3","Type":"ContainerDied","Data":"0fd8bab38f28284175690b986e5ea11137f63049e9e0a611702f43dd5535a79a"} Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.751124 4482 scope.go:117] "RemoveContainer" containerID="0fd8bab38f28284175690b986e5ea11137f63049e9e0a611702f43dd5535a79a" Nov 25 07:11:16 crc kubenswrapper[4482]: E1125 07:11:16.751398 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-zdvcm_openstack-operators(4be124a3-1fa2-455c-834f-01e66fc326b3)\"" 
pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" podUID="4be124a3-1fa2-455c-834f-01e66fc326b3" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.755544 4482 generic.go:334] "Generic (PLEG): container finished" podID="4a627cd2-d42b-4958-a41c-230dd8246061" containerID="981bb162a01daf8aac98284b15a4584e21034cb1fb3ba17f0893fc6c50b0f5dc" exitCode=1 Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.755572 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" event={"ID":"4a627cd2-d42b-4958-a41c-230dd8246061","Type":"ContainerDied","Data":"981bb162a01daf8aac98284b15a4584e21034cb1fb3ba17f0893fc6c50b0f5dc"} Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.756208 4482 scope.go:117] "RemoveContainer" containerID="981bb162a01daf8aac98284b15a4584e21034cb1fb3ba17f0893fc6c50b0f5dc" Nov 25 07:11:16 crc kubenswrapper[4482]: E1125 07:11:16.756466 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-5zxlt_openstack-operators(4a627cd2-d42b-4958-a41c-230dd8246061)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" podUID="4a627cd2-d42b-4958-a41c-230dd8246061" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.768072 4482 generic.go:334] "Generic (PLEG): container finished" podID="3ec6220d-a590-404d-a427-98b94a3910c8" containerID="cc4ecf1453c70746dc1deb1d49326fadc6ec828989acc82b10c7a874225f7a03" exitCode=1 Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.768129 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" event={"ID":"3ec6220d-a590-404d-a427-98b94a3910c8","Type":"ContainerDied","Data":"cc4ecf1453c70746dc1deb1d49326fadc6ec828989acc82b10c7a874225f7a03"} Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.769274 4482 scope.go:117] "RemoveContainer" containerID="cc4ecf1453c70746dc1deb1d49326fadc6ec828989acc82b10c7a874225f7a03" Nov 25 07:11:16 crc kubenswrapper[4482]: E1125 07:11:16.769502 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-5pr4g_openstack-operators(3ec6220d-a590-404d-a427-98b94a3910c8)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" podUID="3ec6220d-a590-404d-a427-98b94a3910c8" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.774114 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8" event={"ID":"7059a6d7-9dca-499a-9110-e8dafb53935b","Type":"ContainerStarted","Data":"812484d6de7afadf5bd8b61029b38d43c0934cb04a4ccdc00ecaa90c90430b55"} Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.774650 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.778214 4482 generic.go:334] "Generic (PLEG): container finished" podID="f3eb6724-3ab3-4027-b8e6-3d90c403f13a" containerID="4544e88eab057900acfa260ed8c505bdb7ca006efb9b2931c6a377490ac80fc8" exitCode=1 Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.778280 4482 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" event={"ID":"f3eb6724-3ab3-4027-b8e6-3d90c403f13a","Type":"ContainerDied","Data":"4544e88eab057900acfa260ed8c505bdb7ca006efb9b2931c6a377490ac80fc8"} Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.778597 4482 scope.go:117] "RemoveContainer" containerID="4544e88eab057900acfa260ed8c505bdb7ca006efb9b2931c6a377490ac80fc8" Nov 25 07:11:16 crc kubenswrapper[4482]: E1125 07:11:16.778806 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-t6mdk_openstack-operators(f3eb6724-3ab3-4027-b8e6-3d90c403f13a)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" podUID="f3eb6724-3ab3-4027-b8e6-3d90c403f13a" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.786045 4482 generic.go:334] "Generic (PLEG): container finished" podID="6ad00506-e452-4f9e-91d3-24b4da4a7104" containerID="75a9f4f34a9acb0958436b585d2b3314cc10a57ea10296fd39dad9843c25bd20" exitCode=1 Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.786084 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" event={"ID":"6ad00506-e452-4f9e-91d3-24b4da4a7104","Type":"ContainerDied","Data":"75a9f4f34a9acb0958436b585d2b3314cc10a57ea10296fd39dad9843c25bd20"} Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.786585 4482 scope.go:117] "RemoveContainer" containerID="75a9f4f34a9acb0958436b585d2b3314cc10a57ea10296fd39dad9843c25bd20" Nov 25 07:11:16 crc kubenswrapper[4482]: E1125 07:11:16.786809 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-2x9vp_openstack-operators(6ad00506-e452-4f9e-91d3-24b4da4a7104)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" podUID="6ad00506-e452-4f9e-91d3-24b4da4a7104" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.814728 4482 scope.go:117] "RemoveContainer" containerID="f162cf30d632ace23a8c2ddb8a5c8df06ab3e974b8684d75d48f6ff873b63bf2" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.880954 4482 scope.go:117] "RemoveContainer" containerID="1dc9279bf9e79ba53fdb68995812068144fddfb6804664ee56c37679d5889bc4" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.908032 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="1a79608b-f242-45d3-aa13-73c0d7bfd626" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.908103 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.909058 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.909102 4482 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/kube-state-metrics-0" podUID="1a79608b-f242-45d3-aa13-73c0d7bfd626" containerName="kube-state-metrics" containerID="cri-o://5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9" gracePeriod=30 Nov 25 07:11:16 crc kubenswrapper[4482]: I1125 07:11:16.935255 4482 scope.go:117] "RemoveContainer" containerID="91b6c9a970b394c7002c806e5d03f8310112460dd8aaa192b5f661ac0e531499" Nov 25 07:11:17 crc kubenswrapper[4482]: I1125 07:11:17.023543 4482 scope.go:117] "RemoveContainer" containerID="e613add272e2b07f21b51e7bfb49ea451dad3418f2af376b2e293c77c216eec9" Nov 25 07:11:17 crc kubenswrapper[4482]: I1125 07:11:17.047574 4482 scope.go:117] "RemoveContainer" containerID="b73b93dbf5efb76f2a5e9d0ad1289405716e666053d780a636ee9e55dd2ad5d6" Nov 25 07:11:17 crc kubenswrapper[4482]: I1125 07:11:17.796181 4482 generic.go:334] "Generic (PLEG): container finished" podID="1a79608b-f242-45d3-aa13-73c0d7bfd626" containerID="5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9" exitCode=2 Nov 25 07:11:17 crc kubenswrapper[4482]: I1125 07:11:17.796459 4482 generic.go:334] "Generic (PLEG): container finished" podID="1a79608b-f242-45d3-aa13-73c0d7bfd626" containerID="cbf861b27de742c2709534ac9a92b8d08e3c7b065fe346b8f2a64751c16df30a" exitCode=1 Nov 25 07:11:17 crc kubenswrapper[4482]: I1125 07:11:17.796364 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1a79608b-f242-45d3-aa13-73c0d7bfd626","Type":"ContainerDied","Data":"5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9"} Nov 25 07:11:17 crc kubenswrapper[4482]: I1125 07:11:17.796527 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1a79608b-f242-45d3-aa13-73c0d7bfd626","Type":"ContainerDied","Data":"cbf861b27de742c2709534ac9a92b8d08e3c7b065fe346b8f2a64751c16df30a"} Nov 25 07:11:17 crc kubenswrapper[4482]: I1125 07:11:17.796550 4482 scope.go:117] "RemoveContainer" containerID="5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9" Nov 25 07:11:17 crc kubenswrapper[4482]: I1125 07:11:17.797401 4482 scope.go:117] "RemoveContainer" containerID="cbf861b27de742c2709534ac9a92b8d08e3c7b065fe346b8f2a64751c16df30a" Nov 25 07:11:17 crc kubenswrapper[4482]: I1125 07:11:17.822339 4482 scope.go:117] "RemoveContainer" containerID="5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9" Nov 25 07:11:17 crc kubenswrapper[4482]: E1125 07:11:17.822622 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9\": container with ID starting with 5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9 not found: ID does not exist" containerID="5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9" Nov 25 07:11:17 crc kubenswrapper[4482]: I1125 07:11:17.822678 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9"} err="failed to get container status \"5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9\": rpc error: code = NotFound desc = could not find container \"5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9\": container with ID starting with 5be5d865f072ec969437ed151fd511229e0bba3b03ed990c076ae97b6b2885b9 not found: ID does not exist" Nov 25 07:11:17 crc kubenswrapper[4482]: I1125 07:11:17.831316 4482 scope.go:117] 
"RemoveContainer" containerID="1bcd9e651d21937f0ca3f5692dad19f9e6429a5d0463edc80504b8c3a06f3f99" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.601992 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.602335 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.602788 4482 scope.go:117] "RemoveContainer" containerID="85e8e95dd9824134268f11ce764c890d570df220b80cf2a24bf08412db33ec3c" Nov 25 07:11:18 crc kubenswrapper[4482]: E1125 07:11:18.603020 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-svglr_openstack-operators(4754fff5-c20f-42c5-8c10-bb9975919bf3)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" podUID="4754fff5-c20f-42c5-8c10-bb9975919bf3" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.616909 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.616939 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.617484 4482 scope.go:117] "RemoveContainer" containerID="0c25c997d297debae124c39c6aa0dfd5090e23447e68f6f16d9eb522386acffb" Nov 25 07:11:18 crc kubenswrapper[4482]: E1125 07:11:18.617694 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-r6cc4_openstack-operators(20c9d02f-1cbc-4c66-84ff-7cbf40bac507)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" podUID="20c9d02f-1cbc-4c66-84ff-7cbf40bac507" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.635581 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.635634 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.636409 4482 scope.go:117] "RemoveContainer" containerID="4aed535fa2d5ffa019e1b430155d9e22df28556b81459a1bd72596ea0f9e8e4d" Nov 25 07:11:18 crc kubenswrapper[4482]: E1125 07:11:18.636650 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-t4dwf_openstack-operators(a2dcdd81-a863-4453-b1b6-e1824d5444b6)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" podUID="a2dcdd81-a863-4453-b1b6-e1824d5444b6" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.730778 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.730815 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.731431 4482 scope.go:117] "RemoveContainer" containerID="1579a4ba95fe0a43f4aab511be88eea30393621c1e2db18af4bf76abe2e434a9" Nov 25 07:11:18 crc kubenswrapper[4482]: E1125 07:11:18.731649 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-2qkzx_openstack-operators(2375b89e-398f-45d4-badc-1980cfcda4a1)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" podUID="2375b89e-398f-45d4-badc-1980cfcda4a1" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.752270 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.752319 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.753242 4482 scope.go:117] "RemoveContainer" containerID="4544e88eab057900acfa260ed8c505bdb7ca006efb9b2931c6a377490ac80fc8" Nov 25 07:11:18 crc kubenswrapper[4482]: E1125 07:11:18.753489 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-t6mdk_openstack-operators(f3eb6724-3ab3-4027-b8e6-3d90c403f13a)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" podUID="f3eb6724-3ab3-4027-b8e6-3d90c403f13a" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.798771 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.798813 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.799147 4482 scope.go:117] "RemoveContainer" containerID="4af07a23b623531b729d59ed36f26a3e18a0afe0cb88163f227964398415d548" Nov 25 07:11:18 crc kubenswrapper[4482]: E1125 07:11:18.799383 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-tzkbq_openstack-operators(d0b2883e-6d53-465c-ba0c-45173ff59d4b)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" podUID="d0b2883e-6d53-465c-ba0c-45173ff59d4b" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.831267 4482 generic.go:334] "Generic (PLEG): container finished" podID="1a79608b-f242-45d3-aa13-73c0d7bfd626" containerID="86add79ccfa7d6add3237e5ffd6cdd4a5cb0b4fd61fee29f78bc4656aee57be1" exitCode=1 Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.831429 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"1a79608b-f242-45d3-aa13-73c0d7bfd626","Type":"ContainerDied","Data":"86add79ccfa7d6add3237e5ffd6cdd4a5cb0b4fd61fee29f78bc4656aee57be1"} Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.831526 4482 scope.go:117] "RemoveContainer" containerID="cbf861b27de742c2709534ac9a92b8d08e3c7b065fe346b8f2a64751c16df30a" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.832869 4482 scope.go:117] "RemoveContainer" containerID="86add79ccfa7d6add3237e5ffd6cdd4a5cb0b4fd61fee29f78bc4656aee57be1" Nov 25 07:11:18 crc kubenswrapper[4482]: E1125 07:11:18.833316 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(1a79608b-f242-45d3-aa13-73c0d7bfd626)\"" pod="openstack/kube-state-metrics-0" podUID="1a79608b-f242-45d3-aa13-73c0d7bfd626" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.837458 4482 generic.go:334] "Generic (PLEG): container finished" podID="61f162c1-bcc6-4098-86f3-7cff5790a2f3" containerID="ea803944fe17974d564d811e0e51fb8c7b8465011e56e6aaff8e90c5536a9cf1" exitCode=1 Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.837535 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" event={"ID":"61f162c1-bcc6-4098-86f3-7cff5790a2f3","Type":"ContainerDied","Data":"ea803944fe17974d564d811e0e51fb8c7b8465011e56e6aaff8e90c5536a9cf1"} Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.838099 4482 scope.go:117] "RemoveContainer" containerID="ea803944fe17974d564d811e0e51fb8c7b8465011e56e6aaff8e90c5536a9cf1" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.838351 4482 scope.go:117] "RemoveContainer" containerID="4544e88eab057900acfa260ed8c505bdb7ca006efb9b2931c6a377490ac80fc8" Nov 25 07:11:18 crc kubenswrapper[4482]: E1125 07:11:18.838554 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-6b7b9ccd57-7v896_metallb-system(61f162c1-bcc6-4098-86f3-7cff5790a2f3)\"" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" podUID="61f162c1-bcc6-4098-86f3-7cff5790a2f3" Nov 25 07:11:18 crc kubenswrapper[4482]: E1125 07:11:18.838589 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-t6mdk_openstack-operators(f3eb6724-3ab3-4027-b8e6-3d90c403f13a)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" podUID="f3eb6724-3ab3-4027-b8e6-3d90c403f13a" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.884710 4482 scope.go:117] "RemoveContainer" containerID="1bcd9e651d21937f0ca3f5692dad19f9e6429a5d0463edc80504b8c3a06f3f99" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.896568 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.896609 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.897150 4482 scope.go:117] "RemoveContainer" 
containerID="cc4ecf1453c70746dc1deb1d49326fadc6ec828989acc82b10c7a874225f7a03" Nov 25 07:11:18 crc kubenswrapper[4482]: E1125 07:11:18.897462 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-5pr4g_openstack-operators(3ec6220d-a590-404d-a427-98b94a3910c8)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" podUID="3ec6220d-a590-404d-a427-98b94a3910c8" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.970306 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.970425 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" Nov 25 07:11:18 crc kubenswrapper[4482]: I1125 07:11:18.972072 4482 scope.go:117] "RemoveContainer" containerID="011637fc673149702bba91b2f72de5945df2d05318e9d6623d3edb14afe9c363" Nov 25 07:11:18 crc kubenswrapper[4482]: E1125 07:11:18.972339 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-m5rfx_openstack-operators(9dbafcad-7706-4390-9745-238418d06f5c)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" podUID="9dbafcad-7706-4390-9745-238418d06f5c" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.001642 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.001703 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.002093 4482 scope.go:117] "RemoveContainer" containerID="78eae792262bd006b19f6db44d464dd88ae675bd36dd142e227d8a2b3f5c2088" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 07:11:19.002359 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-8ttss_openstack-operators(4a4c6e25-e4fb-49b7-b757-e82e153fdb24)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" podUID="4a4c6e25-e4fb-49b7-b757-e82e153fdb24" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.028645 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.029087 4482 scope.go:117] "RemoveContainer" containerID="dea64f64df8c27f6f18ca39acd09b059cce5960e38dff65139934714bce1a004" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 07:11:19.029320 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-pv5cc_openstack-operators(4012508a-01a7-4e14-812e-7c70b350662a)\"" 
pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" podUID="4012508a-01a7-4e14-812e-7c70b350662a" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.037375 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.052798 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.052898 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.053309 4482 scope.go:117] "RemoveContainer" containerID="a2b4406fa2533f16687a31f8e25453646f560f66faf32691e66b91eea2863d4a" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 07:11:19.053528 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-xtvvg_openstack-operators(42e69f15-3b24-4d83-840e-3633c1bb87a3)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" podUID="42e69f15-3b24-4d83-840e-3633c1bb87a3" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.060149 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.060223 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.060560 4482 scope.go:117] "RemoveContainer" containerID="75a9f4f34a9acb0958436b585d2b3314cc10a57ea10296fd39dad9843c25bd20" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 07:11:19.060755 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-2x9vp_openstack-operators(6ad00506-e452-4f9e-91d3-24b4da4a7104)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" podUID="6ad00506-e452-4f9e-91d3-24b4da4a7104" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.076868 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.076913 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.077624 4482 scope.go:117] "RemoveContainer" containerID="7270281d10e76dffc5e940fcc49cbc2e1cbe302b3d771fee1756f9e131c565f7" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 07:11:19.077863 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-jq46h_openstack-operators(4d7476c3-dd4a-4e22-a018-e9a93d53ece5)\"" 
pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" podUID="4d7476c3-dd4a-4e22-a018-e9a93d53ece5" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.147643 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.147686 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.148260 4482 scope.go:117] "RemoveContainer" containerID="981bb162a01daf8aac98284b15a4584e21034cb1fb3ba17f0893fc6c50b0f5dc" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 07:11:19.148512 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-5zxlt_openstack-operators(4a627cd2-d42b-4958-a41c-230dd8246061)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" podUID="4a627cd2-d42b-4958-a41c-230dd8246061" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.177704 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.177744 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.178161 4482 scope.go:117] "RemoveContainer" containerID="4a7f35af51469b0d9fc1fa263adda034c3cc72f7f48f1b03956121c0e4c1c809" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 07:11:19.178383 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-2cfdk_openstack-operators(1af05cb8-e059-49d7-91dc-17bfecaec8db)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" podUID="1af05cb8-e059-49d7-91dc-17bfecaec8db" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.224941 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.224986 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.225333 4482 scope.go:117] "RemoveContainer" containerID="6dfbffe49d3a24d9953151df7822d569293d8b4439f6467cf59164ec26460279" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 07:11:19.225518 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-k8drr_openstack-operators(3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" podUID="3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.468809 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.469110 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.469696 4482 scope.go:117] "RemoveContainer" containerID="0fd8bab38f28284175690b986e5ea11137f63049e9e0a611702f43dd5535a79a" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 07:11:19.469925 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-zdvcm_openstack-operators(4be124a3-1fa2-455c-834f-01e66fc326b3)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" podUID="4be124a3-1fa2-455c-834f-01e66fc326b3" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.541508 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.541557 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.542018 4482 scope.go:117] "RemoveContainer" containerID="8df168e715b6281d9f995e3dac1568ffb63a30b76eaa7abe714c8d40389ec639" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 07:11:19.542291 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-m7kcf_openstack-operators(4ab40028-48ce-48f7-bbd4-97b1bed0cf4c)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" podUID="4ab40028-48ce-48f7-bbd4-97b1bed0cf4c" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.865481 4482 scope.go:117] "RemoveContainer" containerID="86add79ccfa7d6add3237e5ffd6cdd4a5cb0b4fd61fee29f78bc4656aee57be1" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 07:11:19.865940 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(1a79608b-f242-45d3-aa13-73c0d7bfd626)\"" pod="openstack/kube-state-metrics-0" podUID="1a79608b-f242-45d3-aa13-73c0d7bfd626" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.866791 4482 scope.go:117] "RemoveContainer" containerID="a2b4406fa2533f16687a31f8e25453646f560f66faf32691e66b91eea2863d4a" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 07:11:19.867149 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-xtvvg_openstack-operators(42e69f15-3b24-4d83-840e-3633c1bb87a3)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" podUID="42e69f15-3b24-4d83-840e-3633c1bb87a3" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.867597 4482 scope.go:117] "RemoveContainer" containerID="dea64f64df8c27f6f18ca39acd09b059cce5960e38dff65139934714bce1a004" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 
07:11:19.867930 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-pv5cc_openstack-operators(4012508a-01a7-4e14-812e-7c70b350662a)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" podUID="4012508a-01a7-4e14-812e-7c70b350662a" Nov 25 07:11:19 crc kubenswrapper[4482]: I1125 07:11:19.868295 4482 scope.go:117] "RemoveContainer" containerID="011637fc673149702bba91b2f72de5945df2d05318e9d6623d3edb14afe9c363" Nov 25 07:11:19 crc kubenswrapper[4482]: E1125 07:11:19.868756 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-m5rfx_openstack-operators(9dbafcad-7706-4390-9745-238418d06f5c)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" podUID="9dbafcad-7706-4390-9745-238418d06f5c" Nov 25 07:11:20 crc kubenswrapper[4482]: I1125 07:11:20.373025 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 07:11:20 crc kubenswrapper[4482]: I1125 07:11:20.373351 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 07:11:20 crc kubenswrapper[4482]: I1125 07:11:20.374184 4482 scope.go:117] "RemoveContainer" containerID="f950c3b0a856af675ea32b1c79408d9b568dc2750931dbace2af882edf07ac76" Nov 25 07:11:20 crc kubenswrapper[4482]: E1125 07:11:20.374506 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-lx6v6_openstack-operators(3a5cd60b-13ff-44ea-b256-1e05d03912e4)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" podUID="3a5cd60b-13ff-44ea-b256-1e05d03912e4" Nov 25 07:11:21 crc kubenswrapper[4482]: I1125 07:11:21.007491 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:11:21 crc kubenswrapper[4482]: I1125 07:11:21.640130 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-hv5nt" Nov 25 07:11:22 crc kubenswrapper[4482]: I1125 07:11:22.723217 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-tlwch" Nov 25 07:11:23 crc kubenswrapper[4482]: I1125 07:11:23.259045 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 07:11:23 crc kubenswrapper[4482]: I1125 07:11:23.270077 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 07:11:23 crc kubenswrapper[4482]: I1125 07:11:23.330014 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 07:11:23 crc kubenswrapper[4482]: I1125 07:11:23.330088 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 07:11:23 crc kubenswrapper[4482]: I1125 07:11:23.330617 4482 scope.go:117] "RemoveContainer" containerID="91a7b825b34d9ff52731f71622c12b5ee57409d2d673c64c618786df6baecf54" Nov 25 07:11:23 crc kubenswrapper[4482]: E1125 07:11:23.330836 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-7cd5954d9-kmdnq_openstack-operators(004e08bd-55ee-4702-88b6-69bd67a32610)\"" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" podUID="004e08bd-55ee-4702-88b6-69bd67a32610" Nov 25 07:11:23 crc kubenswrapper[4482]: I1125 07:11:23.362344 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 07:11:23 crc kubenswrapper[4482]: I1125 07:11:23.526612 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 07:11:23 crc kubenswrapper[4482]: I1125 07:11:23.526812 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 07:11:23 crc kubenswrapper[4482]: I1125 07:11:23.658786 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-29djj" Nov 25 07:11:23 crc kubenswrapper[4482]: I1125 07:11:23.901672 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 07:11:23 crc kubenswrapper[4482]: I1125 07:11:23.903640 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zd8xz" Nov 25 07:11:23 crc kubenswrapper[4482]: I1125 07:11:23.909756 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 25 07:11:24 crc kubenswrapper[4482]: I1125 07:11:24.143860 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-wktl9" Nov 25 07:11:24 crc kubenswrapper[4482]: I1125 07:11:24.191789 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 07:11:24 crc kubenswrapper[4482]: I1125 07:11:24.260454 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 07:11:24 crc kubenswrapper[4482]: I1125 07:11:24.549143 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-84zf2" Nov 25 07:11:24 crc kubenswrapper[4482]: I1125 07:11:24.770590 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 07:11:24 crc kubenswrapper[4482]: I1125 07:11:24.814259 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.100149 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.181131 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.197447 4482 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.201448 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.205451 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.334096 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.358789 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.402199 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nc9ld" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.449417 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.527807 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.528703 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.552709 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.645133 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.768022 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.802608 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.811895 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.833600 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.851966 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kc4sk" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.917021 4482 generic.go:334] "Generic (PLEG): container finished" podID="39a79591-2e93-478b-8091-e4ea6dca13b1" containerID="f5948215d23d4ef6a772c1f328055595057e8b4b5793f8964b78579b8b885b4a" exitCode=0 Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.917059 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" event={"ID":"39a79591-2e93-478b-8091-e4ea6dca13b1","Type":"ContainerDied","Data":"f5948215d23d4ef6a772c1f328055595057e8b4b5793f8964b78579b8b885b4a"} Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.937410 4482 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-224hz" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.995570 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rrmbj" Nov 25 07:11:25 crc kubenswrapper[4482]: I1125 07:11:25.999451 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.053086 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.132243 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.373683 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.407762 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.495654 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.541724 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.543762 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.570444 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.645152 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.710858 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.741694 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.770433 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.796672 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.819129 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.834794 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.854254 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.854331 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Nov 25 07:11:26 crc 
Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.854943 4482 scope.go:117] "RemoveContainer" containerID="86add79ccfa7d6add3237e5ffd6cdd4a5cb0b4fd61fee29f78bc4656aee57be1"
Nov 25 07:11:26 crc kubenswrapper[4482]: E1125 07:11:26.855271 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(1a79608b-f242-45d3-aa13-73c0d7bfd626)\"" pod="openstack/kube-state-metrics-0" podUID="1a79608b-f242-45d3-aa13-73c0d7bfd626"
Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.896541 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.929649 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.929725 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.967978 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Nov 25 07:11:26 crc kubenswrapper[4482]: I1125 07:11:26.970201 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.033654 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.055588 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.057576 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-2hc7q"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.076893 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.166318 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.167000 4482 scope.go:117] "RemoveContainer" containerID="ea803944fe17974d564d811e0e51fb8c7b8465011e56e6aaff8e90c5536a9cf1"
Nov 25 07:11:27 crc kubenswrapper[4482]: E1125 07:11:27.167227 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-6b7b9ccd57-7v896_metallb-system(61f162c1-bcc6-4098-86f3-7cff5790a2f3)\"" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" podUID="61f162c1-bcc6-4098-86f3-7cff5790a2f3"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.195012 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.201480 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-rmbqp"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.208886 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.221120 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.225733 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-v98p2"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.239998 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.276514 4482 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.294333 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.306984 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.325620 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fv2fv"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.332832 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.352209 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.367241 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.368359 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.387397 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.428593 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.432490 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.441967 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-bootstrap-combined-ca-bundle\") pod \"39a79591-2e93-478b-8091-e4ea6dca13b1\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") "
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.442347 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-ssh-key\") pod \"39a79591-2e93-478b-8091-e4ea6dca13b1\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") "
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.442467 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp7zg\" (UniqueName: \"kubernetes.io/projected/39a79591-2e93-478b-8091-e4ea6dca13b1-kube-api-access-bp7zg\") pod \"39a79591-2e93-478b-8091-e4ea6dca13b1\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") "
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.442565 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-inventory\") pod \"39a79591-2e93-478b-8091-e4ea6dca13b1\" (UID: \"39a79591-2e93-478b-8091-e4ea6dca13b1\") "
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.444749 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.448804 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "39a79591-2e93-478b-8091-e4ea6dca13b1" (UID: "39a79591-2e93-478b-8091-e4ea6dca13b1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.448841 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a79591-2e93-478b-8091-e4ea6dca13b1-kube-api-access-bp7zg" (OuterVolumeSpecName: "kube-api-access-bp7zg") pod "39a79591-2e93-478b-8091-e4ea6dca13b1" (UID: "39a79591-2e93-478b-8091-e4ea6dca13b1"). InnerVolumeSpecName "kube-api-access-bp7zg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.465343 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-inventory" (OuterVolumeSpecName: "inventory") pod "39a79591-2e93-478b-8091-e4ea6dca13b1" (UID: "39a79591-2e93-478b-8091-e4ea6dca13b1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.469363 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "39a79591-2e93-478b-8091-e4ea6dca13b1" (UID: "39a79591-2e93-478b-8091-e4ea6dca13b1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.515097 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.545082 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.545115 4482 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.545125 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39a79591-2e93-478b-8091-e4ea6dca13b1-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.545133 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp7zg\" (UniqueName: \"kubernetes.io/projected/39a79591-2e93-478b-8091-e4ea6dca13b1-kube-api-access-bp7zg\") on node \"crc\" DevicePath \"\""
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.616970 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.651585 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.713868 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.717056 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.721295 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-w2594"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.781104 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.794905 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.820380 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.851819 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.907973 4482 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.912292 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=37.912277574 podStartE2EDuration="37.912277574s" podCreationTimestamp="2025-11-25 07:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:11:13.058046776 +0000 UTC m=+1447.546278034" watchObservedRunningTime="2025-11-25 07:11:27.912277574 +0000 UTC m=+1462.400508833"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.918837 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.918913 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.924787 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.937996 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn" event={"ID":"39a79591-2e93-478b-8091-e4ea6dca13b1","Type":"ContainerDied","Data":"998b58dd270595e4db07847c06a0118750806a26eb9106770959317485f42cf6"}
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.938024 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="998b58dd270595e4db07847c06a0118750806a26eb9106770959317485f42cf6"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.938045 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dmxxn"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.945414 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.947720 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.947693604 podStartE2EDuration="14.947693604s" podCreationTimestamp="2025-11-25 07:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:11:27.942077166 +0000 UTC m=+1462.430308425" watchObservedRunningTime="2025-11-25 07:11:27.947693604 +0000 UTC m=+1462.435924863"
Nov 25 07:11:27 crc kubenswrapper[4482]: I1125 07:11:27.955320 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.000983 4482 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.028371 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.168886 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.181447 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.186826 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.194037 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.233819 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.252057 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.280262 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-drr8w"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.301843 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-6zp6c"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.363561 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.363561 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.370946 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.383162 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.383372 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.391412 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.487586 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.513769 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.531676 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.539755 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.543837 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.560187 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.570720 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.575994 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.661560 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.694103 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.719361 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.724853 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.729915 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.755322 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.802443 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.855290 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-zp9dx"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.872351 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.898007 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.899503 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.969686 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.983132 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 25 07:11:28 crc kubenswrapper[4482]: I1125 07:11:28.983373 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.005030 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.042637 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-x2fgb"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.053251 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.062919 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.069836 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-jhxwv"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.081836 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zvdsz"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.206143 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.223623 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2s94l"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.260391 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.267436 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.268806 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.297778 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.302524 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.319014 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.336395 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.354673 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-lm9gw"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.382054 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-tb7vq"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.533109 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.544620 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-s25q8"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.581332 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-czjx2"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.598939 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.602113 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.603426 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.610557 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.623334 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.676369 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.679615 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.712866 4482 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.713176 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.764801 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.813357 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.832964 4482 scope.go:117] "RemoveContainer" containerID="75a9f4f34a9acb0958436b585d2b3314cc10a57ea10296fd39dad9843c25bd20"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.835042 4482 scope.go:117] "RemoveContainer" containerID="5f13a595a8fde1fca1f51a078a64b6b6ddb8bfde20e101320ce9b3285132e575"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.835325 4482 scope.go:117] "RemoveContainer" containerID="1579a4ba95fe0a43f4aab511be88eea30393621c1e2db18af4bf76abe2e434a9"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.849587 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-7nnr6"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.927585 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Nov 25 07:11:29 crc kubenswrapper[4482]: I1125 07:11:29.966072 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nl4pz"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.018750 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.081807 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.132106 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.181116 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.184101 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.205942 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.221410 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Nov 25 07:11:30 crc kubenswrapper[4482]: E1125 07:11:30.330309 4482 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ad00506_e452_4f9e_91d3_24b4da4a7104.slice/crio-conmon-b0c7772f2272802143d9051b8f4b410c7acbf15c4a723239972813b704ff9a8a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2375b89e_398f_45d4_badc_1980cfcda4a1.slice/crio-fc5f5b0ec47a12b831de524fdf0e8d2cc79a240bc2ac9c1898c7e5930f0ad381.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ad00506_e452_4f9e_91d3_24b4da4a7104.slice/crio-b0c7772f2272802143d9051b8f4b410c7acbf15c4a723239972813b704ff9a8a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod337411b1_ff37_4370_ad36_415f816f5d07.slice/crio-conmon-31a0ef66db67fdd40b262bd509618b6e5e1ff7143eee419902ae9f2a61145dfa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod337411b1_ff37_4370_ad36_415f816f5d07.slice/crio-31a0ef66db67fdd40b262bd509618b6e5e1ff7143eee419902ae9f2a61145dfa.scope\": RecentStats: unable to find data in memory cache]"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.332854 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.399096 4482 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2sx5j"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.403303 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.403927 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.415028 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.428604 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.475070 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.487730 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.514908 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.534349 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.567912 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.586655 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
object-"openshift-authentication-operator"/"serving-cert" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.648060 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.658788 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.678645 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.709783 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.775746 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.812970 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.813000 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mwcmx" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.813229 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.822937 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.832198 4482 scope.go:117] "RemoveContainer" containerID="4aed535fa2d5ffa019e1b430155d9e22df28556b81459a1bd72596ea0f9e8e4d" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.832600 4482 scope.go:117] "RemoveContainer" containerID="4a7f35af51469b0d9fc1fa263adda034c3cc72f7f48f1b03956121c0e4c1c809" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.832813 4482 scope.go:117] "RemoveContainer" containerID="0fd8bab38f28284175690b986e5ea11137f63049e9e0a611702f43dd5535a79a" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.833154 4482 scope.go:117] "RemoveContainer" containerID="6dfbffe49d3a24d9953151df7822d569293d8b4439f6467cf59164ec26460279" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.833687 4482 scope.go:117] "RemoveContainer" containerID="4af07a23b623531b729d59ed36f26a3e18a0afe0cb88163f227964398415d548" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.880041 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.944380 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.992201 4482 generic.go:334] "Generic (PLEG): container finished" podID="2375b89e-398f-45d4-badc-1980cfcda4a1" containerID="fc5f5b0ec47a12b831de524fdf0e8d2cc79a240bc2ac9c1898c7e5930f0ad381" exitCode=1 Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.992286 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" 
event={"ID":"2375b89e-398f-45d4-badc-1980cfcda4a1","Type":"ContainerDied","Data":"fc5f5b0ec47a12b831de524fdf0e8d2cc79a240bc2ac9c1898c7e5930f0ad381"} Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.992786 4482 scope.go:117] "RemoveContainer" containerID="1579a4ba95fe0a43f4aab511be88eea30393621c1e2db18af4bf76abe2e434a9" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.995428 4482 generic.go:334] "Generic (PLEG): container finished" podID="6ad00506-e452-4f9e-91d3-24b4da4a7104" containerID="b0c7772f2272802143d9051b8f4b410c7acbf15c4a723239972813b704ff9a8a" exitCode=1 Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.995566 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" event={"ID":"6ad00506-e452-4f9e-91d3-24b4da4a7104","Type":"ContainerDied","Data":"b0c7772f2272802143d9051b8f4b410c7acbf15c4a723239972813b704ff9a8a"} Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.997419 4482 scope.go:117] "RemoveContainer" containerID="fc5f5b0ec47a12b831de524fdf0e8d2cc79a240bc2ac9c1898c7e5930f0ad381" Nov 25 07:11:30 crc kubenswrapper[4482]: I1125 07:11:30.998129 4482 scope.go:117] "RemoveContainer" containerID="b0c7772f2272802143d9051b8f4b410c7acbf15c4a723239972813b704ff9a8a" Nov 25 07:11:30 crc kubenswrapper[4482]: E1125 07:11:30.998410 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-2qkzx_openstack-operators(2375b89e-398f-45d4-badc-1980cfcda4a1)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" podUID="2375b89e-398f-45d4-badc-1980cfcda4a1" Nov 25 07:11:30 crc kubenswrapper[4482]: E1125 07:11:30.998566 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-2x9vp_openstack-operators(6ad00506-e452-4f9e-91d3-24b4da4a7104)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" podUID="6ad00506-e452-4f9e-91d3-24b4da4a7104" Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.002559 4482 generic.go:334] "Generic (PLEG): container finished" podID="337411b1-ff37-4370-ad36-415f816f5d07" containerID="31a0ef66db67fdd40b262bd509618b6e5e1ff7143eee419902ae9f2a61145dfa" exitCode=1 Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.002687 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" event={"ID":"337411b1-ff37-4370-ad36-415f816f5d07","Type":"ContainerDied","Data":"31a0ef66db67fdd40b262bd509618b6e5e1ff7143eee419902ae9f2a61145dfa"} Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.003765 4482 scope.go:117] "RemoveContainer" containerID="31a0ef66db67fdd40b262bd509618b6e5e1ff7143eee419902ae9f2a61145dfa" Nov 25 07:11:31 crc kubenswrapper[4482]: E1125 07:11:31.004052 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-4mr9n_openstack-operators(337411b1-ff37-4370-ad36-415f816f5d07)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" podUID="337411b1-ff37-4370-ad36-415f816f5d07" Nov 25 07:11:31 crc kubenswrapper[4482]: 
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.028881 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.068303 4482 scope.go:117] "RemoveContainer" containerID="75a9f4f34a9acb0958436b585d2b3314cc10a57ea10296fd39dad9843c25bd20"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.137216 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.156614 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.165543 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.196520 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.199440 4482 scope.go:117] "RemoveContainer" containerID="5f13a595a8fde1fca1f51a078a64b6b6ddb8bfde20e101320ce9b3285132e575"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.218436 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.218642 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.253739 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.299443 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.310162 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.331368 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.336889 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.364434 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-nrnxx"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.405964 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.406240 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.410658 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.453303 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-zh4xs"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.465205 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.486027 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.517607 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.521373 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.529756 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.533877 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-bfsvd"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.553306 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.572553 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.585405 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.618420 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.627026 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.661908 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.680042 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-q59mz"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.704221 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.831258 4482 scope.go:117] "RemoveContainer" containerID="011637fc673149702bba91b2f72de5945df2d05318e9d6623d3edb14afe9c363"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.831362 4482 scope.go:117] "RemoveContainer" containerID="85e8e95dd9824134268f11ce764c890d570df220b80cf2a24bf08412db33ec3c"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.831505 4482 scope.go:117] "RemoveContainer" containerID="cc4ecf1453c70746dc1deb1d49326fadc6ec828989acc82b10c7a874225f7a03"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.831681 4482 scope.go:117] "RemoveContainer" containerID="0c25c997d297debae124c39c6aa0dfd5090e23447e68f6f16d9eb522386acffb"
Nov 25 07:11:31 crc kubenswrapper[4482]: I1125 07:11:31.944671 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.002371 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.013220 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" event={"ID":"3ad7ed45-1ec7-4df0-99a6-d4b7bb56e01a","Type":"ContainerStarted","Data":"7b3712775e22d102c7a7902fd9a44f60319120dd39439d1c7a976fe4a68711ef"} Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.013554 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.016490 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" event={"ID":"1af05cb8-e059-49d7-91dc-17bfecaec8db","Type":"ContainerStarted","Data":"f6ca72dd5309d1876b1a4e2e99fe5519f6fa41e0a4e108304eb8f21bb7b3e4b5"} Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.017351 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.023798 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" event={"ID":"4be124a3-1fa2-455c-834f-01e66fc326b3","Type":"ContainerStarted","Data":"9d0f3836d680aa508fd4f42a04e4047fcd4a61efb8ea1859130244f294736a56"} Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.024006 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.032034 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" event={"ID":"d0b2883e-6d53-465c-ba0c-45173ff59d4b","Type":"ContainerStarted","Data":"a4fa65f5c666c8c1dea486ce1aca33d1e819c4eca70efe1e45f7acc6c593c395"} Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.032845 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.037359 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" event={"ID":"a2dcdd81-a863-4453-b1b6-e1824d5444b6","Type":"ContainerStarted","Data":"fdf11cf7de25ae71c017fd9cd306bf50aa5909d53db43028a92aab5871f08839"} Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.037820 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.040396 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.070589 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.122421 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 
07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.155056 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.199484 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.334104 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.384271 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.396345 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.403892 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.408030 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.418429 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.464832 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.481066 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.482798 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.498757 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.513625 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.514253 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.533332 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.553682 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.592111 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.595488 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-24qbv" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.603974 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.618967 4482 reflector.go:368] Caches populated 
Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.618967 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-59d6s"
Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.629154 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.672048 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.701306 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.728580 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.739686 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.746574 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.749386 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.898605 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.915271 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.930064 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 25 07:11:32 crc kubenswrapper[4482]: I1125 07:11:32.941570 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.002662 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.033037 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.038995 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.043843 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.059486 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" event={"ID":"20c9d02f-1cbc-4c66-84ff-7cbf40bac507","Type":"ContainerStarted","Data":"48eaaf93d66cd7696a55525661237ce1f5d378be8cb57689d29d0dea2c6d1f45"}
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.059961 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.061556 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" event={"ID":"3ec6220d-a590-404d-a427-98b94a3910c8","Type":"ContainerStarted","Data":"fd54b8f314f646e6dd7ed764dce242c21ff88dd488a184997b1e2d3c809066b8"}
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.061815 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.063691 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" event={"ID":"9dbafcad-7706-4390-9745-238418d06f5c","Type":"ContainerStarted","Data":"b1f43f295c0a1cb14f0565560d8dd927054aa2387289362fe1c04db183f1682e"}
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.063859 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.066407 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" event={"ID":"4754fff5-c20f-42c5-8c10-bb9975919bf3","Type":"ContainerStarted","Data":"4c56b6fe94e3637c29b8c2799e423eea8017b6de6ef7721e3d352182e8022b33"}
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.067141 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.095843 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.097918 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.163649 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.197682 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-4gkl5"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.219926 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.261587 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.286220 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-j4g52"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.315897 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.326940 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.403569 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-f8qvp"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.436281 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.463456 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.593637 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-bnrld"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.601745 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.623231 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.625152 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.639480 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-78v5p"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.731297 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.734743 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.807875 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.824618 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.831297 4482 scope.go:117] "RemoveContainer" containerID="4544e88eab057900acfa260ed8c505bdb7ca006efb9b2931c6a377490ac80fc8"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.831720 4482 scope.go:117] "RemoveContainer" containerID="a2b4406fa2533f16687a31f8e25453646f560f66faf32691e66b91eea2863d4a"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.831964 4482 scope.go:117] "RemoveContainer" containerID="91a7b825b34d9ff52731f71622c12b5ee57409d2d673c64c618786df6baecf54"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.832127 4482 scope.go:117] "RemoveContainer" containerID="f950c3b0a856af675ea32b1c79408d9b568dc2750931dbace2af882edf07ac76"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.832359 4482 scope.go:117] "RemoveContainer" containerID="78eae792262bd006b19f6db44d464dd88ae675bd36dd142e227d8a2b3f5c2088"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.832463 4482 scope.go:117] "RemoveContainer" containerID="8df168e715b6281d9f995e3dac1568ffb63a30b76eaa7abe714c8d40389ec639"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.832559 4482 scope.go:117] "RemoveContainer" containerID="dea64f64df8c27f6f18ca39acd09b059cce5960e38dff65139934714bce1a004"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.836623 4482 scope.go:117] "RemoveContainer" containerID="981bb162a01daf8aac98284b15a4584e21034cb1fb3ba17f0893fc6c50b0f5dc"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.845674 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Nov 25 07:11:33 crc kubenswrapper[4482]: I1125
07:11:33.847261 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.877832 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.884949 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.885122 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 07:11:33 crc kubenswrapper[4482]: I1125 07:11:33.999233 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.025654 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.061938 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-t5vb8" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.083597 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.104153 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.140282 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.164127 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.179098 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.210275 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.213597 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.272083 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hk9xb" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.306037 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.318145 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.331894 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.441918 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.466384 4482 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"dnsmasq-dns-dockercfg-2b78f" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.488342 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.488780 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.549263 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.551752 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2s7cr" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.595337 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.597498 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.613822 4482 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.621834 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.640869 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.648552 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.663714 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.687748 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.696569 4482 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.732568 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.767949 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.814499 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.831212 4482 scope.go:117] "RemoveContainer" containerID="7270281d10e76dffc5e940fcc49cbc2e1cbe302b3d771fee1756f9e131c565f7" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.834748 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.838722 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.940988 4482 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.961129 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.968891 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 07:11:34 crc kubenswrapper[4482]: I1125 07:11:34.969190 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.013029 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-n574j" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.013991 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.014837 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.017522 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.036722 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.039309 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.086461 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" event={"ID":"4d7476c3-dd4a-4e22-a018-e9a93d53ece5","Type":"ContainerStarted","Data":"951802a13f8f1b0b3459ebebbff4702c487e6a9552ee14892ae14735eae0f843"} Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.086679 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.088115 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" event={"ID":"3a5cd60b-13ff-44ea-b256-1e05d03912e4","Type":"ContainerStarted","Data":"6be17ed309ef7d743b2479849981b23a71c6835b112706f8c232207cb553c3c2"} Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.088490 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.090902 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" event={"ID":"f3eb6724-3ab3-4027-b8e6-3d90c403f13a","Type":"ContainerStarted","Data":"a08ddff2943edae15d460102d9fec34a4a4a5818ea3a911eeb444cf07990dd02"} Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.091375 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.093982 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" 
event={"ID":"4ab40028-48ce-48f7-bbd4-97b1bed0cf4c","Type":"ContainerStarted","Data":"c07f74edc4ec9ff3376a6378af026bce0e3fbb46c01b1708ecd4f5c87c2fd33c"} Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.094451 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.095964 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" event={"ID":"004e08bd-55ee-4702-88b6-69bd67a32610","Type":"ContainerStarted","Data":"1784ff33dfee865b9866bea76b645d5687f83b97d4b32405908d715ddce08c5f"} Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.096375 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.101417 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" event={"ID":"42e69f15-3b24-4d83-840e-3633c1bb87a3","Type":"ContainerStarted","Data":"882d2104a65fcbe85c6b3125679693238361c1ecfec939e20889572c97a8b25b"} Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.101820 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.105687 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" event={"ID":"4012508a-01a7-4e14-812e-7c70b350662a","Type":"ContainerStarted","Data":"cf0355bdf83e03dbb3e8570da61523c6267b347fe8cba245fdaa6593d455f5c2"} Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.105866 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.112087 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" event={"ID":"4a627cd2-d42b-4958-a41c-230dd8246061","Type":"ContainerStarted","Data":"37bbd5383d6cfd5a2b8573fdce61754575191a768320628a368c315c7b9e782d"} Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.112298 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.115161 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4nvnb" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.116660 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" event={"ID":"4a4c6e25-e4fb-49b7-b757-e82e153fdb24","Type":"ContainerStarted","Data":"a5f18ec1c64fc8a3b7cf8901e1261eb825cd7c12dc788452ff73f7b19befdb5b"} Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.116849 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.182043 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 07:11:35 
crc kubenswrapper[4482]: I1125 07:11:35.197966 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.234711 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.504661 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.535750 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.541653 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.558239 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.646835 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.648547 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.652402 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.658013 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.658736 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.658868 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.666779 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-z2r8l" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.712665 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.718111 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.752332 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.756938 4482 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.757263 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8cfca3f69cb75a4b54631dcbff6934041cade0206b01ae122006f00eac358bc2" gracePeriod=5 Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.770673 4482 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.798334 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.816333 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.868746 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 07:11:35 crc kubenswrapper[4482]: I1125 07:11:35.900394 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.021484 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.048790 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.065722 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.249575 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.272629 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.312187 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.359348 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.387992 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.396301 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.437382 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.485687 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.498368 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.533558 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.545402 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.570227 4482 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.595416 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.620396 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.669426 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.692244 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.717959 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.757313 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.762517 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.762647 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.781307 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.790674 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.849024 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.880087 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.890235 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 07:11:36 crc kubenswrapper[4482]: I1125 07:11:36.982103 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.006129 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-kkk2l" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.006143 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.031773 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.069670 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.256432 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 
07:11:37.341408 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.375875 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.460911 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-r4bvk" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.509068 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kdhzt" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.543586 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-95dlg" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.560248 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.568536 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.716079 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.801657 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-p6bnt" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.832023 4482 scope.go:117] "RemoveContainer" containerID="ea803944fe17974d564d811e0e51fb8c7b8465011e56e6aaff8e90c5536a9cf1" Nov 25 07:11:37 crc kubenswrapper[4482]: E1125 07:11:37.832277 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-6b7b9ccd57-7v896_metallb-system(61f162c1-bcc6-4098-86f3-7cff5790a2f3)\"" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" podUID="61f162c1-bcc6-4098-86f3-7cff5790a2f3" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.838135 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.861068 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.872658 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 07:11:37 crc kubenswrapper[4482]: I1125 07:11:37.988788 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-vft5t" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.037047 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.223455 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.227274 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 
07:11:38.277849 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jjmk9" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.364305 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.446265 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.503397 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.572780 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.602727 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-xthjx" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.603212 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-svglr" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.618485 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-r6cc4" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.621364 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-ngzzq" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.643709 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-t4dwf" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.672281 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.704120 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.730854 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.730902 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.731523 4482 scope.go:117] "RemoveContainer" containerID="fc5f5b0ec47a12b831de524fdf0e8d2cc79a240bc2ac9c1898c7e5930f0ad381" Nov 25 07:11:38 crc kubenswrapper[4482]: E1125 07:11:38.731848 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-2qkzx_openstack-operators(2375b89e-398f-45d4-badc-1980cfcda4a1)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" podUID="2375b89e-398f-45d4-badc-1980cfcda4a1" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.781367 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 07:11:38 crc 
kubenswrapper[4482]: I1125 07:11:38.801981 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-tzkbq" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.868698 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.874054 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.898278 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-5pr4g" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.966748 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 07:11:38 crc kubenswrapper[4482]: I1125 07:11:38.973084 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-m5rfx" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.011279 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-8ttss" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.030445 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-pv5cc" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.060882 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-xtvvg" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.061597 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.061975 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.062240 4482 scope.go:117] "RemoveContainer" containerID="b0c7772f2272802143d9051b8f4b410c7acbf15c4a723239972813b704ff9a8a" Nov 25 07:11:39 crc kubenswrapper[4482]: E1125 07:11:39.062613 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-2x9vp_openstack-operators(6ad00506-e452-4f9e-91d3-24b4da4a7104)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" podUID="6ad00506-e452-4f9e-91d3-24b4da4a7104" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.066211 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.081579 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-jq46h" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.098787 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.116142 4482 
reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-8v6ms" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.119341 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.119388 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.138436 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.159431 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-5zxlt" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.166532 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.182668 4482 scope.go:117] "RemoveContainer" containerID="b0c7772f2272802143d9051b8f4b410c7acbf15c4a723239972813b704ff9a8a" Nov 25 07:11:39 crc kubenswrapper[4482]: E1125 07:11:39.183087 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-2x9vp_openstack-operators(6ad00506-e452-4f9e-91d3-24b4da4a7104)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" podUID="6ad00506-e452-4f9e-91d3-24b4da4a7104" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.183545 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-2cfdk" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.239348 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-k8drr" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.303059 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.379271 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.397838 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.398511 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.434616 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.477674 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-zdvcm" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.542722 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-m7kcf" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.573111 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.697342 4482 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-qrwf8" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.744109 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.805669 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.817716 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.831323 4482 scope.go:117] "RemoveContainer" containerID="86add79ccfa7d6add3237e5ffd6cdd4a5cb0b4fd61fee29f78bc4656aee57be1" Nov 25 07:11:39 crc kubenswrapper[4482]: I1125 07:11:39.894694 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.043018 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7m8kq"] Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.049809 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-76cd-account-create-6zd5h"] Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.055795 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7m8kq"] Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.062531 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-d8r5j"] Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.068225 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-76cd-account-create-6zd5h"] Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.073409 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-d8r5j"] Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.207078 4482 generic.go:334] "Generic (PLEG): container finished" podID="8b848a1b-214e-49da-ab4b-5eb3150fc85f" containerID="330890bab35de55892d48967cdb2b785d93c19b4774bf25ec16d751942b078e5" exitCode=1 Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.207226 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-bcpsp" event={"ID":"8b848a1b-214e-49da-ab4b-5eb3150fc85f","Type":"ContainerDied","Data":"330890bab35de55892d48967cdb2b785d93c19b4774bf25ec16d751942b078e5"} Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.208188 4482 scope.go:117] "RemoveContainer" containerID="330890bab35de55892d48967cdb2b785d93c19b4774bf25ec16d751942b078e5" Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.217495 4482 generic.go:334] "Generic (PLEG): container finished" podID="724fe0c2-5ef8-48a9-8c39-c73b17e6fef2" containerID="50f859be50538b1dcef72ee3c5e778001e4ded9a483d340971535d73f32b7e0d" exitCode=1 Nov 25 07:11:40 
crc kubenswrapper[4482]: I1125 07:11:40.217520 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-rvzmp" event={"ID":"724fe0c2-5ef8-48a9-8c39-c73b17e6fef2","Type":"ContainerDied","Data":"50f859be50538b1dcef72ee3c5e778001e4ded9a483d340971535d73f32b7e0d"} Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.218248 4482 scope.go:117] "RemoveContainer" containerID="50f859be50538b1dcef72ee3c5e778001e4ded9a483d340971535d73f32b7e0d" Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.238655 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.380617 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-lx6v6" Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.942109 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 07:11:40 crc kubenswrapper[4482]: I1125 07:11:40.942404 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.021432 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e764-account-create-492vx"] Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.054575 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e764-account-create-492vx"] Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.082970 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.083118 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.083241 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.083270 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.083337 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.083397 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.083430 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.083467 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.083546 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.083878 4482 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.083894 4482 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.083902 4482 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.083910 4482 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.093674 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.186711 4482 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.231301 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-bcpsp" event={"ID":"8b848a1b-214e-49da-ab4b-5eb3150fc85f","Type":"ContainerStarted","Data":"5df62af0a3a86b6dfaf95c97f72fe2fe99a653d0081ce2e0bedcf54d503c86de"} Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.234196 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1a79608b-f242-45d3-aa13-73c0d7bfd626","Type":"ContainerStarted","Data":"d35097868fc07e70a6104a6caae9f1a861f0bece04ce580956910066b281ae9c"} Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.234501 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.236701 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-rvzmp" event={"ID":"724fe0c2-5ef8-48a9-8c39-c73b17e6fef2","Type":"ContainerStarted","Data":"77919002e35137a60cdbe03d63e50c8d86b8176763c97b91fc6c14bc2001cd93"} Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.238927 4482 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.238986 4482 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8cfca3f69cb75a4b54631dcbff6934041cade0206b01ae122006f00eac358bc2" exitCode=137 Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.239043 4482 scope.go:117] "RemoveContainer" 
containerID="8cfca3f69cb75a4b54631dcbff6934041cade0206b01ae122006f00eac358bc2" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.239078 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.258353 4482 scope.go:117] "RemoveContainer" containerID="8cfca3f69cb75a4b54631dcbff6934041cade0206b01ae122006f00eac358bc2" Nov 25 07:11:41 crc kubenswrapper[4482]: E1125 07:11:41.258665 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cfca3f69cb75a4b54631dcbff6934041cade0206b01ae122006f00eac358bc2\": container with ID starting with 8cfca3f69cb75a4b54631dcbff6934041cade0206b01ae122006f00eac358bc2 not found: ID does not exist" containerID="8cfca3f69cb75a4b54631dcbff6934041cade0206b01ae122006f00eac358bc2" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.258704 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cfca3f69cb75a4b54631dcbff6934041cade0206b01ae122006f00eac358bc2"} err="failed to get container status \"8cfca3f69cb75a4b54631dcbff6934041cade0206b01ae122006f00eac358bc2\": rpc error: code = NotFound desc = could not find container \"8cfca3f69cb75a4b54631dcbff6934041cade0206b01ae122006f00eac358bc2\": container with ID starting with 8cfca3f69cb75a4b54631dcbff6934041cade0206b01ae122006f00eac358bc2 not found: ID does not exist" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.843920 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da49643-084c-4726-ab3f-d640282105c3" path="/var/lib/kubelet/pods/6da49643-084c-4726-ab3f-d640282105c3/volumes" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.848353 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f5da866-34ec-4b01-826a-1f2061eb3fcc" path="/var/lib/kubelet/pods/9f5da866-34ec-4b01-826a-1f2061eb3fcc/volumes" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.850259 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b42ea052-21b5-407f-8d8d-f474f42e92ff" path="/var/lib/kubelet/pods/b42ea052-21b5-407f-8d8d-f474f42e92ff/volumes" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.852243 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f415cc2f-955d-4eef-bca2-2d990fc72f69" path="/var/lib/kubelet/pods/f415cc2f-955d-4eef-bca2-2d990fc72f69/volumes" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.853533 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.853775 4482 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.869876 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.869914 4482 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="cc01e20f-e600-4aab-86d3-9d79e5940e4e" Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.876225 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 07:11:41 crc kubenswrapper[4482]: I1125 07:11:41.876260 4482 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="cc01e20f-e600-4aab-86d3-9d79e5940e4e" Nov 25 07:11:42 crc kubenswrapper[4482]: I1125 07:11:42.831252 4482 scope.go:117] "RemoveContainer" containerID="31a0ef66db67fdd40b262bd509618b6e5e1ff7143eee419902ae9f2a61145dfa" Nov 25 07:11:42 crc kubenswrapper[4482]: E1125 07:11:42.831894 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-4mr9n_openstack-operators(337411b1-ff37-4370-ad36-415f816f5d07)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" podUID="337411b1-ff37-4370-ad36-415f816f5d07" Nov 25 07:11:43 crc kubenswrapper[4482]: I1125 07:11:43.339140 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-kmdnq" Nov 25 07:11:46 crc kubenswrapper[4482]: I1125 07:11:46.860284 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 07:11:48 crc kubenswrapper[4482]: I1125 07:11:48.755286 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-t6mdk" Nov 25 07:11:49 crc kubenswrapper[4482]: I1125 07:11:49.831127 4482 scope.go:117] "RemoveContainer" containerID="ea803944fe17974d564d811e0e51fb8c7b8465011e56e6aaff8e90c5536a9cf1" Nov 25 07:11:50 crc kubenswrapper[4482]: I1125 07:11:50.323981 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" event={"ID":"61f162c1-bcc6-4098-86f3-7cff5790a2f3","Type":"ContainerStarted","Data":"e46e3a6e4661411034bea425354f0c4d78ced6ee25c73b2aaac3d367dc1a0101"} Nov 25 07:11:50 crc kubenswrapper[4482]: I1125 07:11:50.324954 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 07:11:50 crc kubenswrapper[4482]: I1125 07:11:50.833857 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7mvql" Nov 25 07:11:52 crc kubenswrapper[4482]: I1125 07:11:52.830698 4482 scope.go:117] "RemoveContainer" containerID="fc5f5b0ec47a12b831de524fdf0e8d2cc79a240bc2ac9c1898c7e5930f0ad381" Nov 25 07:11:53 crc kubenswrapper[4482]: I1125 07:11:53.356459 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" event={"ID":"2375b89e-398f-45d4-badc-1980cfcda4a1","Type":"ContainerStarted","Data":"1df8587f1d45f4bcb5043e4ea687c7f373f2a01180f3c8d88b5dde82082ef636"} Nov 25 07:11:53 crc kubenswrapper[4482]: I1125 07:11:53.357095 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" Nov 25 07:11:53 crc kubenswrapper[4482]: I1125 07:11:53.831953 4482 scope.go:117] "RemoveContainer" containerID="31a0ef66db67fdd40b262bd509618b6e5e1ff7143eee419902ae9f2a61145dfa" Nov 25 07:11:53 crc kubenswrapper[4482]: I1125 07:11:53.832650 4482 scope.go:117] "RemoveContainer" 
containerID="b0c7772f2272802143d9051b8f4b410c7acbf15c4a723239972813b704ff9a8a" Nov 25 07:11:54 crc kubenswrapper[4482]: I1125 07:11:54.369832 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" event={"ID":"6ad00506-e452-4f9e-91d3-24b4da4a7104","Type":"ContainerStarted","Data":"cdb4fcebbc3cd359fef0a888eb0f49cb0bacd4f93b1e7b3924320e7e1b24a37f"} Nov 25 07:11:54 crc kubenswrapper[4482]: I1125 07:11:54.370261 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" Nov 25 07:11:54 crc kubenswrapper[4482]: I1125 07:11:54.371594 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4mr9n" event={"ID":"337411b1-ff37-4370-ad36-415f816f5d07","Type":"ContainerStarted","Data":"8ed88fa6a00b866627763cb3e4ead0f52702f115dbef0d47117a1d14ef8ba09c"} Nov 25 07:11:55 crc kubenswrapper[4482]: I1125 07:11:55.915941 4482 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 07:11:58 crc kubenswrapper[4482]: I1125 07:11:58.022424 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-ddgt4"] Nov 25 07:11:58 crc kubenswrapper[4482]: I1125 07:11:58.034081 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1193-account-create-2nz49"] Nov 25 07:11:58 crc kubenswrapper[4482]: I1125 07:11:58.041251 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-ddgt4"] Nov 25 07:11:58 crc kubenswrapper[4482]: I1125 07:11:58.047112 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1193-account-create-2nz49"] Nov 25 07:11:58 crc kubenswrapper[4482]: I1125 07:11:58.738011 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-2qkzx" Nov 25 07:11:59 crc kubenswrapper[4482]: I1125 07:11:59.061158 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2x9vp" Nov 25 07:11:59 crc kubenswrapper[4482]: I1125 07:11:59.227122 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 07:11:59 crc kubenswrapper[4482]: I1125 07:11:59.843534 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24002027-a259-4705-a0a0-9d2479988e23" path="/var/lib/kubelet/pods/24002027-a259-4705-a0a0-9d2479988e23/volumes" Nov 25 07:11:59 crc kubenswrapper[4482]: I1125 07:11:59.845219 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c26151e9-5ea6-4cd4-810c-e2d22aef5d7e" path="/var/lib/kubelet/pods/c26151e9-5ea6-4cd4-810c-e2d22aef5d7e/volumes" Nov 25 07:12:09 crc kubenswrapper[4482]: I1125 07:12:09.117915 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:12:09 crc kubenswrapper[4482]: I1125 07:12:09.118526 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.063890 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq"] Nov 25 07:12:11 crc kubenswrapper[4482]: E1125 07:12:11.064516 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" containerName="installer" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.064538 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" containerName="installer" Nov 25 07:12:11 crc kubenswrapper[4482]: E1125 07:12:11.064586 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a79591-2e93-478b-8091-e4ea6dca13b1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.064593 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a79591-2e93-478b-8091-e4ea6dca13b1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 07:12:11 crc kubenswrapper[4482]: E1125 07:12:11.064607 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.064613 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.064862 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.064881 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac6721d-3577-4cc2-876e-64a829e86b2b" containerName="installer" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.064905 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="39a79591-2e93-478b-8091-e4ea6dca13b1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.065860 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.070544 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.071349 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.071498 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.073107 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.088032 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98gmc\" (UniqueName: \"kubernetes.io/projected/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-kube-api-access-98gmc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq\" (UID: \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.088075 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq\" (UID: \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.088128 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq\" (UID: \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.118216 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq"] Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.190184 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98gmc\" (UniqueName: \"kubernetes.io/projected/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-kube-api-access-98gmc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq\" (UID: \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.190671 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq\" (UID: \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.190830 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq\" (UID: \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.198806 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq\" (UID: \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.205194 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98gmc\" (UniqueName: \"kubernetes.io/projected/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-kube-api-access-98gmc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq\" (UID: \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.207577 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq\" (UID: \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.382523 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" Nov 25 07:12:11 crc kubenswrapper[4482]: I1125 07:12:11.528241 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.043228 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dj7vv"] Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.045249 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.061135 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dj7vv"] Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.214029 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hjsg\" (UniqueName: \"kubernetes.io/projected/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-kube-api-access-7hjsg\") pod \"certified-operators-dj7vv\" (UID: \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\") " pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.214104 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-catalog-content\") pod \"certified-operators-dj7vv\" (UID: \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\") " pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.214143 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-utilities\") pod \"certified-operators-dj7vv\" (UID: \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\") " pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.298588 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq"] Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.315289 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hjsg\" (UniqueName: \"kubernetes.io/projected/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-kube-api-access-7hjsg\") pod \"certified-operators-dj7vv\" (UID: \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\") " pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.315342 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-catalog-content\") pod \"certified-operators-dj7vv\" (UID: \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\") " pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.315395 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-utilities\") pod \"certified-operators-dj7vv\" (UID: \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\") " pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.316931 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-catalog-content\") pod \"certified-operators-dj7vv\" (UID: \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\") " pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.316945 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-utilities\") pod \"certified-operators-dj7vv\" (UID: 
\"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\") " pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.343738 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hjsg\" (UniqueName: \"kubernetes.io/projected/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-kube-api-access-7hjsg\") pod \"certified-operators-dj7vv\" (UID: \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\") " pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.370361 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.545783 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" event={"ID":"5369c6f0-a3ea-470c-bda2-abba45b2b4e6","Type":"ContainerStarted","Data":"27b7e7fe4449ab2e2e2373764cebed43f78dc3e51010076e3851024a030b6f0e"} Nov 25 07:12:12 crc kubenswrapper[4482]: I1125 07:12:12.728358 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dj7vv"] Nov 25 07:12:13 crc kubenswrapper[4482]: I1125 07:12:13.556260 4482 generic.go:334] "Generic (PLEG): container finished" podID="6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" containerID="3ff0602cab30e0f5e7b0effbcf1d20d1bf707782c6021346e269605d239b012a" exitCode=0 Nov 25 07:12:13 crc kubenswrapper[4482]: I1125 07:12:13.556369 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj7vv" event={"ID":"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7","Type":"ContainerDied","Data":"3ff0602cab30e0f5e7b0effbcf1d20d1bf707782c6021346e269605d239b012a"} Nov 25 07:12:13 crc kubenswrapper[4482]: I1125 07:12:13.556837 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj7vv" event={"ID":"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7","Type":"ContainerStarted","Data":"c97e900058aa4e2af9fb44cffca394f674f2d1c2685def08a07e996f5a2f72f8"} Nov 25 07:12:13 crc kubenswrapper[4482]: I1125 07:12:13.857811 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vp5dz"] Nov 25 07:12:13 crc kubenswrapper[4482]: I1125 07:12:13.860799 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:13 crc kubenswrapper[4482]: I1125 07:12:13.900941 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vp5dz"] Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.059441 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47d66957-13fe-4c90-b512-d8e8e56e5e29-utilities\") pod \"redhat-marketplace-vp5dz\" (UID: \"47d66957-13fe-4c90-b512-d8e8e56e5e29\") " pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.059513 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47d66957-13fe-4c90-b512-d8e8e56e5e29-catalog-content\") pod \"redhat-marketplace-vp5dz\" (UID: \"47d66957-13fe-4c90-b512-d8e8e56e5e29\") " pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.059776 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2nmk\" (UniqueName: \"kubernetes.io/projected/47d66957-13fe-4c90-b512-d8e8e56e5e29-kube-api-access-w2nmk\") pod \"redhat-marketplace-vp5dz\" (UID: \"47d66957-13fe-4c90-b512-d8e8e56e5e29\") " pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.173834 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2nmk\" (UniqueName: \"kubernetes.io/projected/47d66957-13fe-4c90-b512-d8e8e56e5e29-kube-api-access-w2nmk\") pod \"redhat-marketplace-vp5dz\" (UID: \"47d66957-13fe-4c90-b512-d8e8e56e5e29\") " pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.174070 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47d66957-13fe-4c90-b512-d8e8e56e5e29-utilities\") pod \"redhat-marketplace-vp5dz\" (UID: \"47d66957-13fe-4c90-b512-d8e8e56e5e29\") " pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.174202 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47d66957-13fe-4c90-b512-d8e8e56e5e29-catalog-content\") pod \"redhat-marketplace-vp5dz\" (UID: \"47d66957-13fe-4c90-b512-d8e8e56e5e29\") " pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.177835 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47d66957-13fe-4c90-b512-d8e8e56e5e29-utilities\") pod \"redhat-marketplace-vp5dz\" (UID: \"47d66957-13fe-4c90-b512-d8e8e56e5e29\") " pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.178142 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47d66957-13fe-4c90-b512-d8e8e56e5e29-catalog-content\") pod \"redhat-marketplace-vp5dz\" (UID: \"47d66957-13fe-4c90-b512-d8e8e56e5e29\") " pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.208070 4482 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-w2nmk\" (UniqueName: \"kubernetes.io/projected/47d66957-13fe-4c90-b512-d8e8e56e5e29-kube-api-access-w2nmk\") pod \"redhat-marketplace-vp5dz\" (UID: \"47d66957-13fe-4c90-b512-d8e8e56e5e29\") " pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.308952 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.437059 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rbrwb"] Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.440359 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.467262 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rbrwb"] Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.480234 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qml7t\" (UniqueName: \"kubernetes.io/projected/c64be5bc-6821-4ed8-9155-dcedbfaec076-kube-api-access-qml7t\") pod \"redhat-operators-rbrwb\" (UID: \"c64be5bc-6821-4ed8-9155-dcedbfaec076\") " pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.480273 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64be5bc-6821-4ed8-9155-dcedbfaec076-utilities\") pod \"redhat-operators-rbrwb\" (UID: \"c64be5bc-6821-4ed8-9155-dcedbfaec076\") " pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.480305 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64be5bc-6821-4ed8-9155-dcedbfaec076-catalog-content\") pod \"redhat-operators-rbrwb\" (UID: \"c64be5bc-6821-4ed8-9155-dcedbfaec076\") " pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.485146 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.573947 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj7vv" event={"ID":"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7","Type":"ContainerStarted","Data":"26f26af5b08f474d119ca1b784e655dbaf76e5a8ed034aad44e3f07d68278749"} Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.576475 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" event={"ID":"5369c6f0-a3ea-470c-bda2-abba45b2b4e6","Type":"ContainerStarted","Data":"eccdb26ac9df28e4acb94942b08bbabe95aa2be2698814184745510a0d01d17b"} Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.581794 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qml7t\" (UniqueName: \"kubernetes.io/projected/c64be5bc-6821-4ed8-9155-dcedbfaec076-kube-api-access-qml7t\") pod \"redhat-operators-rbrwb\" (UID: \"c64be5bc-6821-4ed8-9155-dcedbfaec076\") " pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.581838 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64be5bc-6821-4ed8-9155-dcedbfaec076-utilities\") pod \"redhat-operators-rbrwb\" (UID: \"c64be5bc-6821-4ed8-9155-dcedbfaec076\") " pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.581882 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64be5bc-6821-4ed8-9155-dcedbfaec076-catalog-content\") pod \"redhat-operators-rbrwb\" (UID: \"c64be5bc-6821-4ed8-9155-dcedbfaec076\") " pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.582814 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64be5bc-6821-4ed8-9155-dcedbfaec076-utilities\") pod \"redhat-operators-rbrwb\" (UID: \"c64be5bc-6821-4ed8-9155-dcedbfaec076\") " pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.582916 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64be5bc-6821-4ed8-9155-dcedbfaec076-catalog-content\") pod \"redhat-operators-rbrwb\" (UID: \"c64be5bc-6821-4ed8-9155-dcedbfaec076\") " pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.601876 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qml7t\" (UniqueName: \"kubernetes.io/projected/c64be5bc-6821-4ed8-9155-dcedbfaec076-kube-api-access-qml7t\") pod \"redhat-operators-rbrwb\" (UID: \"c64be5bc-6821-4ed8-9155-dcedbfaec076\") " pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.610107 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" podStartSLOduration=2.815099472 podStartE2EDuration="3.610081996s" podCreationTimestamp="2025-11-25 07:12:11 +0000 UTC" firstStartedPulling="2025-11-25 07:12:12.308639839 +0000 UTC m=+1506.796871098" lastFinishedPulling="2025-11-25 07:12:13.103622363 +0000 UTC 
m=+1507.591853622" observedRunningTime="2025-11-25 07:12:14.603900122 +0000 UTC m=+1509.092131381" watchObservedRunningTime="2025-11-25 07:12:14.610081996 +0000 UTC m=+1509.098313255" Nov 25 07:12:14 crc kubenswrapper[4482]: I1125 07:12:14.758760 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:15 crc kubenswrapper[4482]: I1125 07:12:15.108505 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 07:12:15 crc kubenswrapper[4482]: I1125 07:12:15.150995 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vp5dz"] Nov 25 07:12:15 crc kubenswrapper[4482]: W1125 07:12:15.155710 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47d66957_13fe_4c90_b512_d8e8e56e5e29.slice/crio-c4b4a812ea1972743337986c00aa12e76cd7e3fa834d8b99acb3a0aaae0a00e5 WatchSource:0}: Error finding container c4b4a812ea1972743337986c00aa12e76cd7e3fa834d8b99acb3a0aaae0a00e5: Status 404 returned error can't find the container with id c4b4a812ea1972743337986c00aa12e76cd7e3fa834d8b99acb3a0aaae0a00e5 Nov 25 07:12:15 crc kubenswrapper[4482]: I1125 07:12:15.333053 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rbrwb"] Nov 25 07:12:15 crc kubenswrapper[4482]: W1125 07:12:15.360660 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc64be5bc_6821_4ed8_9155_dcedbfaec076.slice/crio-31a242bc029f3b5cc8ba728dce1f1596e64e3bd7d0e8b299a836d400272222d0 WatchSource:0}: Error finding container 31a242bc029f3b5cc8ba728dce1f1596e64e3bd7d0e8b299a836d400272222d0: Status 404 returned error can't find the container with id 31a242bc029f3b5cc8ba728dce1f1596e64e3bd7d0e8b299a836d400272222d0 Nov 25 07:12:15 crc kubenswrapper[4482]: I1125 07:12:15.588663 4482 generic.go:334] "Generic (PLEG): container finished" podID="c64be5bc-6821-4ed8-9155-dcedbfaec076" containerID="37919b746029886f5f9f9b337fdf8d12b682f39da187e3849085917d1f59da45" exitCode=0 Nov 25 07:12:15 crc kubenswrapper[4482]: I1125 07:12:15.588756 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbrwb" event={"ID":"c64be5bc-6821-4ed8-9155-dcedbfaec076","Type":"ContainerDied","Data":"37919b746029886f5f9f9b337fdf8d12b682f39da187e3849085917d1f59da45"} Nov 25 07:12:15 crc kubenswrapper[4482]: I1125 07:12:15.588787 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbrwb" event={"ID":"c64be5bc-6821-4ed8-9155-dcedbfaec076","Type":"ContainerStarted","Data":"31a242bc029f3b5cc8ba728dce1f1596e64e3bd7d0e8b299a836d400272222d0"} Nov 25 07:12:15 crc kubenswrapper[4482]: I1125 07:12:15.591226 4482 generic.go:334] "Generic (PLEG): container finished" podID="47d66957-13fe-4c90-b512-d8e8e56e5e29" containerID="a91e448e082687944f11be0f87bfb503392b7892078d7bb801e716a31dd554da" exitCode=0 Nov 25 07:12:15 crc kubenswrapper[4482]: I1125 07:12:15.591285 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vp5dz" event={"ID":"47d66957-13fe-4c90-b512-d8e8e56e5e29","Type":"ContainerDied","Data":"a91e448e082687944f11be0f87bfb503392b7892078d7bb801e716a31dd554da"} Nov 25 07:12:15 crc kubenswrapper[4482]: I1125 07:12:15.591310 4482 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-vp5dz" event={"ID":"47d66957-13fe-4c90-b512-d8e8e56e5e29","Type":"ContainerStarted","Data":"c4b4a812ea1972743337986c00aa12e76cd7e3fa834d8b99acb3a0aaae0a00e5"} Nov 25 07:12:15 crc kubenswrapper[4482]: I1125 07:12:15.596126 4482 generic.go:334] "Generic (PLEG): container finished" podID="6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" containerID="26f26af5b08f474d119ca1b784e655dbaf76e5a8ed034aad44e3f07d68278749" exitCode=0 Nov 25 07:12:15 crc kubenswrapper[4482]: I1125 07:12:15.596439 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj7vv" event={"ID":"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7","Type":"ContainerDied","Data":"26f26af5b08f474d119ca1b784e655dbaf76e5a8ed034aad44e3f07d68278749"} Nov 25 07:12:16 crc kubenswrapper[4482]: I1125 07:12:16.102399 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-v527q"] Nov 25 07:12:16 crc kubenswrapper[4482]: I1125 07:12:16.108015 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-v527q"] Nov 25 07:12:16 crc kubenswrapper[4482]: I1125 07:12:16.613483 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vp5dz" event={"ID":"47d66957-13fe-4c90-b512-d8e8e56e5e29","Type":"ContainerStarted","Data":"91114f21376d1fc22e801a198ac41b58bfdedf695c2e88ae05d134e849f47fdc"} Nov 25 07:12:16 crc kubenswrapper[4482]: I1125 07:12:16.619807 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj7vv" event={"ID":"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7","Type":"ContainerStarted","Data":"b308e6b69a893dd4eb099274b910226855b3fd7a6454936464d8a5b56908738f"} Nov 25 07:12:16 crc kubenswrapper[4482]: I1125 07:12:16.700153 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dj7vv" podStartSLOduration=2.0510382639999998 podStartE2EDuration="4.700134275s" podCreationTimestamp="2025-11-25 07:12:12 +0000 UTC" firstStartedPulling="2025-11-25 07:12:13.558477685 +0000 UTC m=+1508.046708944" lastFinishedPulling="2025-11-25 07:12:16.207573696 +0000 UTC m=+1510.695804955" observedRunningTime="2025-11-25 07:12:16.699288962 +0000 UTC m=+1511.187520241" watchObservedRunningTime="2025-11-25 07:12:16.700134275 +0000 UTC m=+1511.188365534" Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.042982 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-eb6b-account-create-nmg2j"] Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.055843 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-849d-account-create-s6d2f"] Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.062922 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-eb6b-account-create-nmg2j"] Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.068666 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-w6572"] Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.074052 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c8ca-account-create-s9xf9"] Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.079309 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-849d-account-create-s6d2f"] Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.085270 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-w6572"] Nov 25 
07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.090986 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c8ca-account-create-s9xf9"] Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.109388 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-c5mm4"] Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.114644 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d451-account-create-mjmt4"] Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.127258 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-5sj86"] Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.142224 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d451-account-create-mjmt4"] Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.142280 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-5sj86"] Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.146743 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-c5mm4"] Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.629951 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbrwb" event={"ID":"c64be5bc-6821-4ed8-9155-dcedbfaec076","Type":"ContainerStarted","Data":"158141583c140affa4ad273cf384153866107387333cdbe836ac9fda21c86b7b"} Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.632408 4482 generic.go:334] "Generic (PLEG): container finished" podID="47d66957-13fe-4c90-b512-d8e8e56e5e29" containerID="91114f21376d1fc22e801a198ac41b58bfdedf695c2e88ae05d134e849f47fdc" exitCode=0 Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.634134 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vp5dz" event={"ID":"47d66957-13fe-4c90-b512-d8e8e56e5e29","Type":"ContainerDied","Data":"91114f21376d1fc22e801a198ac41b58bfdedf695c2e88ae05d134e849f47fdc"} Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.866461 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0de43686-0d8e-4474-befd-ca1bdefb961d" path="/var/lib/kubelet/pods/0de43686-0d8e-4474-befd-ca1bdefb961d/volumes" Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.875008 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c01d69-7aa7-49af-99f5-465fafbbc191" path="/var/lib/kubelet/pods/35c01d69-7aa7-49af-99f5-465fafbbc191/volumes" Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.887152 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d" path="/var/lib/kubelet/pods/435df5a1-571b-4cc4-b1ac-4e3cfc9dba9d/volumes" Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.890645 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="479dc11c-3d7f-46f3-a7a4-ea663237c8af" path="/var/lib/kubelet/pods/479dc11c-3d7f-46f3-a7a4-ea663237c8af/volumes" Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.893400 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4804a1ca-dd11-42f7-913d-4b3c1bdb7ead" path="/var/lib/kubelet/pods/4804a1ca-dd11-42f7-913d-4b3c1bdb7ead/volumes" Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.896853 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab09a06a-9cbb-420a-b456-1aa12e0bd0e2" path="/var/lib/kubelet/pods/ab09a06a-9cbb-420a-b456-1aa12e0bd0e2/volumes" Nov 25 07:12:17 
crc kubenswrapper[4482]: I1125 07:12:17.900075 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbcc64ec-1a64-403b-be72-d33bb30e5385" path="/var/lib/kubelet/pods/cbcc64ec-1a64-403b-be72-d33bb30e5385/volumes" Nov 25 07:12:17 crc kubenswrapper[4482]: I1125 07:12:17.902673 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd55de78-9d5c-46fa-9289-2ab8dbe482ad" path="/var/lib/kubelet/pods/fd55de78-9d5c-46fa-9289-2ab8dbe482ad/volumes" Nov 25 07:12:18 crc kubenswrapper[4482]: I1125 07:12:18.653312 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vp5dz" event={"ID":"47d66957-13fe-4c90-b512-d8e8e56e5e29","Type":"ContainerStarted","Data":"9f0eafd3311cd63c95323e07ca80b3381335c10e0555e6681794602258402e8e"} Nov 25 07:12:18 crc kubenswrapper[4482]: I1125 07:12:18.673880 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vp5dz" podStartSLOduration=3.062892133 podStartE2EDuration="5.673868766s" podCreationTimestamp="2025-11-25 07:12:13 +0000 UTC" firstStartedPulling="2025-11-25 07:12:15.592235691 +0000 UTC m=+1510.080466939" lastFinishedPulling="2025-11-25 07:12:18.203212314 +0000 UTC m=+1512.691443572" observedRunningTime="2025-11-25 07:12:18.668000382 +0000 UTC m=+1513.156231641" watchObservedRunningTime="2025-11-25 07:12:18.673868766 +0000 UTC m=+1513.162100015" Nov 25 07:12:19 crc kubenswrapper[4482]: I1125 07:12:19.671197 4482 generic.go:334] "Generic (PLEG): container finished" podID="c64be5bc-6821-4ed8-9155-dcedbfaec076" containerID="158141583c140affa4ad273cf384153866107387333cdbe836ac9fda21c86b7b" exitCode=0 Nov 25 07:12:19 crc kubenswrapper[4482]: I1125 07:12:19.671285 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbrwb" event={"ID":"c64be5bc-6821-4ed8-9155-dcedbfaec076","Type":"ContainerDied","Data":"158141583c140affa4ad273cf384153866107387333cdbe836ac9fda21c86b7b"} Nov 25 07:12:20 crc kubenswrapper[4482]: I1125 07:12:20.683769 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbrwb" event={"ID":"c64be5bc-6821-4ed8-9155-dcedbfaec076","Type":"ContainerStarted","Data":"0c1681082467d374faa607583e95edc04c3863b50f5b8d4d8fe6e96231e61603"} Nov 25 07:12:20 crc kubenswrapper[4482]: I1125 07:12:20.713182 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rbrwb" podStartSLOduration=2.067265218 podStartE2EDuration="6.713147439s" podCreationTimestamp="2025-11-25 07:12:14 +0000 UTC" firstStartedPulling="2025-11-25 07:12:15.590655582 +0000 UTC m=+1510.078886840" lastFinishedPulling="2025-11-25 07:12:20.236537803 +0000 UTC m=+1514.724769061" observedRunningTime="2025-11-25 07:12:20.700352386 +0000 UTC m=+1515.188583645" watchObservedRunningTime="2025-11-25 07:12:20.713147439 +0000 UTC m=+1515.201378698" Nov 25 07:12:22 crc kubenswrapper[4482]: I1125 07:12:22.371146 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:22 crc kubenswrapper[4482]: I1125 07:12:22.371434 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:23 crc kubenswrapper[4482]: I1125 07:12:23.407803 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dj7vv" 
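Annotation: the startup-probe failure above ("failed to connect service \":50051\" within 1s") is the catalog pod's registry-server gRPC endpoint not answering yet; once the catalog content finishes extracting, the probes flip to "started" and "ready" below. A bare TCP reachability sketch of that check (the real probe is a gRPC health check; host and port here are assumptions):

    # Approximate the ":50051 within 1s" startup probe as a plain TCP connect.
    import socket

    def can_connect(host: str = "127.0.0.1", port: int = 50051, timeout: float = 1.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(can_connect())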
podUID="6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" containerName="registry-server" probeResult="failure" output=< Nov 25 07:12:23 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 07:12:23 crc kubenswrapper[4482]: > Nov 25 07:12:24 crc kubenswrapper[4482]: I1125 07:12:24.486119 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:24 crc kubenswrapper[4482]: I1125 07:12:24.486486 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:24 crc kubenswrapper[4482]: I1125 07:12:24.530447 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:24 crc kubenswrapper[4482]: I1125 07:12:24.758822 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:12:24 crc kubenswrapper[4482]: I1125 07:12:24.759121 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:24 crc kubenswrapper[4482]: I1125 07:12:24.759151 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:25 crc kubenswrapper[4482]: I1125 07:12:25.801031 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rbrwb" podUID="c64be5bc-6821-4ed8-9155-dcedbfaec076" containerName="registry-server" probeResult="failure" output=< Nov 25 07:12:25 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 07:12:25 crc kubenswrapper[4482]: > Nov 25 07:12:27 crc kubenswrapper[4482]: I1125 07:12:27.169839 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b7b9ccd57-7v896" Nov 25 07:12:32 crc kubenswrapper[4482]: I1125 07:12:32.429758 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:32 crc kubenswrapper[4482]: I1125 07:12:32.485695 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:34 crc kubenswrapper[4482]: I1125 07:12:34.807375 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:34 crc kubenswrapper[4482]: I1125 07:12:34.856122 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.172307 4482 scope.go:117] "RemoveContainer" containerID="68da05c4f90f4c8b0cd68d362275bcdb4253cd80771e13bccc95ea5c0318ab1f" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.211120 4482 scope.go:117] "RemoveContainer" containerID="ff6f4281b862446f33ae43f605e4e3423fe0a1dd108c42fbaacf29274300ab62" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.237975 4482 scope.go:117] "RemoveContainer" containerID="4b18aa953c1b8458a0a0f0a0fff79c0504846ace794a7a0610b1d5db9b8e8a48" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.281391 4482 scope.go:117] "RemoveContainer" containerID="0bee1c445376db5e16c48dd26adea1cd6aa36a61033ef86239a31d624dd6e545" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.334287 4482 
scope.go:117] "RemoveContainer" containerID="e4e4656b2f5b2dfb9503d31ac45b87579f1a819c7c11e64ca1946598cd11703f" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.380880 4482 scope.go:117] "RemoveContainer" containerID="65b5914bea150d66430d140c56b8764b9179acef8664592b80979c195012ef15" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.415955 4482 scope.go:117] "RemoveContainer" containerID="b2852e446bd1dbdc835255fdcf70d485fc8d1935bd59836352d4c24d92d2eb4a" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.445021 4482 scope.go:117] "RemoveContainer" containerID="324f9f52ab7a4fc32f1b38bcb0e9ee42a28d5c810bd52862a6b02c65fa70f133" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.470568 4482 scope.go:117] "RemoveContainer" containerID="99938804327d074506ade0be54e949d16e6d9d49671f0e0fd4f9f20caca1b9a7" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.500512 4482 scope.go:117] "RemoveContainer" containerID="95785a714c9130ac63897f82ac42470eefab3926496e55045a301e9fc5f71f2e" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.523352 4482 scope.go:117] "RemoveContainer" containerID="9964604583b93a9fbf942889db321001f39fff096f50da8420490f6b27cb4c5d" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.551803 4482 scope.go:117] "RemoveContainer" containerID="f0890ca3f20afb75dd4d01538548a5f142c752342b6c1d02b5ea3990b1b7ebc0" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.573564 4482 scope.go:117] "RemoveContainer" containerID="ea794e92f85452ae0d911c240dcdf5367b4501e67cb6783becb64cff608c5494" Nov 25 07:12:38 crc kubenswrapper[4482]: I1125 07:12:38.605157 4482 scope.go:117] "RemoveContainer" containerID="3824631206b39c6605fd32c79e27c515c92bd15cded7a2d8667ebf3ddfbb6fb3" Nov 25 07:12:39 crc kubenswrapper[4482]: I1125 07:12:39.117403 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:12:39 crc kubenswrapper[4482]: I1125 07:12:39.118014 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:12:39 crc kubenswrapper[4482]: I1125 07:12:39.118153 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 07:12:39 crc kubenswrapper[4482]: I1125 07:12:39.119233 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:12:39 crc kubenswrapper[4482]: I1125 07:12:39.119387 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" gracePeriod=600 Nov 25 07:12:39 crc kubenswrapper[4482]: E1125 
07:12:39.244325 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:12:39 crc kubenswrapper[4482]: I1125 07:12:39.886801 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" exitCode=0 Nov 25 07:12:39 crc kubenswrapper[4482]: I1125 07:12:39.887066 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77"} Nov 25 07:12:39 crc kubenswrapper[4482]: I1125 07:12:39.887102 4482 scope.go:117] "RemoveContainer" containerID="63bdd9f0fce14d34b7bf553de17b7114201d3cbf1828eb48f5089e09d1c6eec0" Nov 25 07:12:39 crc kubenswrapper[4482]: I1125 07:12:39.887678 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:12:39 crc kubenswrapper[4482]: E1125 07:12:39.887899 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:12:40 crc kubenswrapper[4482]: I1125 07:12:40.042413 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-nlsmj"] Nov 25 07:12:40 crc kubenswrapper[4482]: I1125 07:12:40.050035 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-nlsmj"] Nov 25 07:12:41 crc kubenswrapper[4482]: I1125 07:12:41.851148 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d08539-2898-4d05-af16-1dd533f1720d" path="/var/lib/kubelet/pods/a3d08539-2898-4d05-af16-1dd533f1720d/volumes" Nov 25 07:12:53 crc kubenswrapper[4482]: I1125 07:12:53.831560 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:12:53 crc kubenswrapper[4482]: E1125 07:12:53.832573 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:12:55 crc kubenswrapper[4482]: I1125 07:12:55.290004 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dj7vv"] Nov 25 07:12:55 crc kubenswrapper[4482]: I1125 07:12:55.291661 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dj7vv" podUID="6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" 
containerName="registry-server" containerID="cri-o://b308e6b69a893dd4eb099274b910226855b3fd7a6454936464d8a5b56908738f" gracePeriod=2 Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.041124 4482 generic.go:334] "Generic (PLEG): container finished" podID="6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" containerID="b308e6b69a893dd4eb099274b910226855b3fd7a6454936464d8a5b56908738f" exitCode=0 Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.041215 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj7vv" event={"ID":"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7","Type":"ContainerDied","Data":"b308e6b69a893dd4eb099274b910226855b3fd7a6454936464d8a5b56908738f"} Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.041493 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dj7vv" event={"ID":"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7","Type":"ContainerDied","Data":"c97e900058aa4e2af9fb44cffca394f674f2d1c2685def08a07e996f5a2f72f8"} Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.041988 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c97e900058aa4e2af9fb44cffca394f674f2d1c2685def08a07e996f5a2f72f8" Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.078589 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.161758 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-utilities" (OuterVolumeSpecName: "utilities") pod "6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" (UID: "6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.161920 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-utilities\") pod \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\" (UID: \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\") " Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.162042 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hjsg\" (UniqueName: \"kubernetes.io/projected/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-kube-api-access-7hjsg\") pod \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\" (UID: \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\") " Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.162195 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-catalog-content\") pod \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\" (UID: \"6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7\") " Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.163000 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.175644 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-kube-api-access-7hjsg" (OuterVolumeSpecName: "kube-api-access-7hjsg") pod "6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" (UID: "6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7"). InnerVolumeSpecName "kube-api-access-7hjsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.200797 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" (UID: "6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.264404 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hjsg\" (UniqueName: \"kubernetes.io/projected/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-kube-api-access-7hjsg\") on node \"crc\" DevicePath \"\"" Nov 25 07:12:56 crc kubenswrapper[4482]: I1125 07:12:56.264437 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:12:57 crc kubenswrapper[4482]: I1125 07:12:57.031180 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-7b7rr"] Nov 25 07:12:57 crc kubenswrapper[4482]: I1125 07:12:57.038209 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mrd6z"] Nov 25 07:12:57 crc kubenswrapper[4482]: I1125 07:12:57.045346 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-7b7rr"] Nov 25 07:12:57 crc kubenswrapper[4482]: I1125 07:12:57.050243 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dj7vv" Nov 25 07:12:57 crc kubenswrapper[4482]: I1125 07:12:57.050954 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mrd6z"] Nov 25 07:12:57 crc kubenswrapper[4482]: I1125 07:12:57.080979 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dj7vv"] Nov 25 07:12:57 crc kubenswrapper[4482]: I1125 07:12:57.088440 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dj7vv"] Nov 25 07:12:57 crc kubenswrapper[4482]: I1125 07:12:57.843643 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573eba52-c038-42e0-89a7-4791962151a4" path="/var/lib/kubelet/pods/573eba52-c038-42e0-89a7-4791962151a4/volumes" Nov 25 07:12:57 crc kubenswrapper[4482]: I1125 07:12:57.845229 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" path="/var/lib/kubelet/pods/6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7/volumes" Nov 25 07:12:57 crc kubenswrapper[4482]: I1125 07:12:57.846326 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fd67d9d-6ac0-496c-9726-ccb87a383a9a" path="/var/lib/kubelet/pods/8fd67d9d-6ac0-496c-9726-ccb87a383a9a/volumes" Nov 25 07:12:59 crc kubenswrapper[4482]: I1125 07:12:59.490354 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vp5dz"] Nov 25 07:12:59 crc kubenswrapper[4482]: I1125 07:12:59.490918 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vp5dz" podUID="47d66957-13fe-4c90-b512-d8e8e56e5e29" containerName="registry-server" containerID="cri-o://9f0eafd3311cd63c95323e07ca80b3381335c10e0555e6681794602258402e8e" gracePeriod=2 Nov 25 07:12:59 crc kubenswrapper[4482]: I1125 07:12:59.976886 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.046483 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47d66957-13fe-4c90-b512-d8e8e56e5e29-utilities\") pod \"47d66957-13fe-4c90-b512-d8e8e56e5e29\" (UID: \"47d66957-13fe-4c90-b512-d8e8e56e5e29\") " Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.046592 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2nmk\" (UniqueName: \"kubernetes.io/projected/47d66957-13fe-4c90-b512-d8e8e56e5e29-kube-api-access-w2nmk\") pod \"47d66957-13fe-4c90-b512-d8e8e56e5e29\" (UID: \"47d66957-13fe-4c90-b512-d8e8e56e5e29\") " Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.046767 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47d66957-13fe-4c90-b512-d8e8e56e5e29-catalog-content\") pod \"47d66957-13fe-4c90-b512-d8e8e56e5e29\" (UID: \"47d66957-13fe-4c90-b512-d8e8e56e5e29\") " Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.047292 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47d66957-13fe-4c90-b512-d8e8e56e5e29-utilities" (OuterVolumeSpecName: "utilities") pod "47d66957-13fe-4c90-b512-d8e8e56e5e29" (UID: "47d66957-13fe-4c90-b512-d8e8e56e5e29"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.052559 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47d66957-13fe-4c90-b512-d8e8e56e5e29-kube-api-access-w2nmk" (OuterVolumeSpecName: "kube-api-access-w2nmk") pod "47d66957-13fe-4c90-b512-d8e8e56e5e29" (UID: "47d66957-13fe-4c90-b512-d8e8e56e5e29"). InnerVolumeSpecName "kube-api-access-w2nmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.057828 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47d66957-13fe-4c90-b512-d8e8e56e5e29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47d66957-13fe-4c90-b512-d8e8e56e5e29" (UID: "47d66957-13fe-4c90-b512-d8e8e56e5e29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.092615 4482 generic.go:334] "Generic (PLEG): container finished" podID="47d66957-13fe-4c90-b512-d8e8e56e5e29" containerID="9f0eafd3311cd63c95323e07ca80b3381335c10e0555e6681794602258402e8e" exitCode=0 Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.092659 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vp5dz" event={"ID":"47d66957-13fe-4c90-b512-d8e8e56e5e29","Type":"ContainerDied","Data":"9f0eafd3311cd63c95323e07ca80b3381335c10e0555e6681794602258402e8e"} Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.092690 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vp5dz" event={"ID":"47d66957-13fe-4c90-b512-d8e8e56e5e29","Type":"ContainerDied","Data":"c4b4a812ea1972743337986c00aa12e76cd7e3fa834d8b99acb3a0aaae0a00e5"} Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.092681 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vp5dz" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.092704 4482 scope.go:117] "RemoveContainer" containerID="9f0eafd3311cd63c95323e07ca80b3381335c10e0555e6681794602258402e8e" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.098975 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rbrwb"] Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.099195 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rbrwb" podUID="c64be5bc-6821-4ed8-9155-dcedbfaec076" containerName="registry-server" containerID="cri-o://0c1681082467d374faa607583e95edc04c3863b50f5b8d4d8fe6e96231e61603" gracePeriod=2 Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.144027 4482 scope.go:117] "RemoveContainer" containerID="91114f21376d1fc22e801a198ac41b58bfdedf695c2e88ae05d134e849f47fdc" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.149161 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47d66957-13fe-4c90-b512-d8e8e56e5e29-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.149207 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2nmk\" (UniqueName: \"kubernetes.io/projected/47d66957-13fe-4c90-b512-d8e8e56e5e29-kube-api-access-w2nmk\") on node \"crc\" DevicePath \"\"" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.149219 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47d66957-13fe-4c90-b512-d8e8e56e5e29-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.150134 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vp5dz"] Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.164664 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vp5dz"] Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.172299 4482 scope.go:117] "RemoveContainer" containerID="a91e448e082687944f11be0f87bfb503392b7892078d7bb801e716a31dd554da" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.266641 4482 scope.go:117] "RemoveContainer" containerID="9f0eafd3311cd63c95323e07ca80b3381335c10e0555e6681794602258402e8e" Nov 25 07:13:00 crc kubenswrapper[4482]: E1125 07:13:00.270341 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f0eafd3311cd63c95323e07ca80b3381335c10e0555e6681794602258402e8e\": container with ID starting with 9f0eafd3311cd63c95323e07ca80b3381335c10e0555e6681794602258402e8e not found: ID does not exist" containerID="9f0eafd3311cd63c95323e07ca80b3381335c10e0555e6681794602258402e8e" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.270385 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f0eafd3311cd63c95323e07ca80b3381335c10e0555e6681794602258402e8e"} err="failed to get container status \"9f0eafd3311cd63c95323e07ca80b3381335c10e0555e6681794602258402e8e\": rpc error: code = NotFound desc = could not find container \"9f0eafd3311cd63c95323e07ca80b3381335c10e0555e6681794602258402e8e\": container with ID starting with 9f0eafd3311cd63c95323e07ca80b3381335c10e0555e6681794602258402e8e not found: ID does not exist" Nov 25 
07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.270414 4482 scope.go:117] "RemoveContainer" containerID="91114f21376d1fc22e801a198ac41b58bfdedf695c2e88ae05d134e849f47fdc" Nov 25 07:13:00 crc kubenswrapper[4482]: E1125 07:13:00.271038 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91114f21376d1fc22e801a198ac41b58bfdedf695c2e88ae05d134e849f47fdc\": container with ID starting with 91114f21376d1fc22e801a198ac41b58bfdedf695c2e88ae05d134e849f47fdc not found: ID does not exist" containerID="91114f21376d1fc22e801a198ac41b58bfdedf695c2e88ae05d134e849f47fdc" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.271094 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91114f21376d1fc22e801a198ac41b58bfdedf695c2e88ae05d134e849f47fdc"} err="failed to get container status \"91114f21376d1fc22e801a198ac41b58bfdedf695c2e88ae05d134e849f47fdc\": rpc error: code = NotFound desc = could not find container \"91114f21376d1fc22e801a198ac41b58bfdedf695c2e88ae05d134e849f47fdc\": container with ID starting with 91114f21376d1fc22e801a198ac41b58bfdedf695c2e88ae05d134e849f47fdc not found: ID does not exist" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.271122 4482 scope.go:117] "RemoveContainer" containerID="a91e448e082687944f11be0f87bfb503392b7892078d7bb801e716a31dd554da" Nov 25 07:13:00 crc kubenswrapper[4482]: E1125 07:13:00.271492 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a91e448e082687944f11be0f87bfb503392b7892078d7bb801e716a31dd554da\": container with ID starting with a91e448e082687944f11be0f87bfb503392b7892078d7bb801e716a31dd554da not found: ID does not exist" containerID="a91e448e082687944f11be0f87bfb503392b7892078d7bb801e716a31dd554da" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.271524 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a91e448e082687944f11be0f87bfb503392b7892078d7bb801e716a31dd554da"} err="failed to get container status \"a91e448e082687944f11be0f87bfb503392b7892078d7bb801e716a31dd554da\": rpc error: code = NotFound desc = could not find container \"a91e448e082687944f11be0f87bfb503392b7892078d7bb801e716a31dd554da\": container with ID starting with a91e448e082687944f11be0f87bfb503392b7892078d7bb801e716a31dd554da not found: ID does not exist" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.457387 4482 util.go:48] "No ready sandbox for pod can be found. 
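[editor's note] The RemoveContainer → "ContainerStatus from runtime service failed" → "DeleteContainer returned error" runs above are benign: a second cleanup pass asks CRI-O to delete containers the first pass already removed, and NotFound just means the desired state already holds. A sketch of that idempotent-delete pattern, with a stub standing in for the real CRI call:

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the gRPC NotFound the runtime returns for an
// already-deleted container.
var errNotFound = errors.New("rpc error: code = NotFound desc = could not find container")

// removeContainer treats NotFound as success: deleting something that is
// already gone still leaves the node in the state we wanted.
func removeContainer(id string, runtimeRemove func(string) error) error {
	err := runtimeRemove(id)
	if errors.Is(err, errNotFound) {
		fmt.Printf("DeleteContainer returned error for %s: already gone, ignoring\n", id)
		return nil
	}
	return err
}

func main() {
	alreadyGone := func(string) error { return errNotFound }
	if err := removeContainer("9f0eafd3", alreadyGone); err != nil {
		fmt.Println("unexpected:", err)
	}
}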
Need to start a new one" pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.555028 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qml7t\" (UniqueName: \"kubernetes.io/projected/c64be5bc-6821-4ed8-9155-dcedbfaec076-kube-api-access-qml7t\") pod \"c64be5bc-6821-4ed8-9155-dcedbfaec076\" (UID: \"c64be5bc-6821-4ed8-9155-dcedbfaec076\") " Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.555085 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64be5bc-6821-4ed8-9155-dcedbfaec076-utilities\") pod \"c64be5bc-6821-4ed8-9155-dcedbfaec076\" (UID: \"c64be5bc-6821-4ed8-9155-dcedbfaec076\") " Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.555103 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64be5bc-6821-4ed8-9155-dcedbfaec076-catalog-content\") pod \"c64be5bc-6821-4ed8-9155-dcedbfaec076\" (UID: \"c64be5bc-6821-4ed8-9155-dcedbfaec076\") " Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.556554 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c64be5bc-6821-4ed8-9155-dcedbfaec076-utilities" (OuterVolumeSpecName: "utilities") pod "c64be5bc-6821-4ed8-9155-dcedbfaec076" (UID: "c64be5bc-6821-4ed8-9155-dcedbfaec076"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.560218 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c64be5bc-6821-4ed8-9155-dcedbfaec076-kube-api-access-qml7t" (OuterVolumeSpecName: "kube-api-access-qml7t") pod "c64be5bc-6821-4ed8-9155-dcedbfaec076" (UID: "c64be5bc-6821-4ed8-9155-dcedbfaec076"). InnerVolumeSpecName "kube-api-access-qml7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.615958 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c64be5bc-6821-4ed8-9155-dcedbfaec076-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c64be5bc-6821-4ed8-9155-dcedbfaec076" (UID: "c64be5bc-6821-4ed8-9155-dcedbfaec076"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.658097 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qml7t\" (UniqueName: \"kubernetes.io/projected/c64be5bc-6821-4ed8-9155-dcedbfaec076-kube-api-access-qml7t\") on node \"crc\" DevicePath \"\"" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.658140 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64be5bc-6821-4ed8-9155-dcedbfaec076-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:13:00 crc kubenswrapper[4482]: I1125 07:13:00.658154 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64be5bc-6821-4ed8-9155-dcedbfaec076-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.037540 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-nsg2v"] Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.046900 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-nsg2v"] Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.103899 4482 generic.go:334] "Generic (PLEG): container finished" podID="c64be5bc-6821-4ed8-9155-dcedbfaec076" containerID="0c1681082467d374faa607583e95edc04c3863b50f5b8d4d8fe6e96231e61603" exitCode=0 Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.103957 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rbrwb" Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.104010 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbrwb" event={"ID":"c64be5bc-6821-4ed8-9155-dcedbfaec076","Type":"ContainerDied","Data":"0c1681082467d374faa607583e95edc04c3863b50f5b8d4d8fe6e96231e61603"} Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.104037 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rbrwb" event={"ID":"c64be5bc-6821-4ed8-9155-dcedbfaec076","Type":"ContainerDied","Data":"31a242bc029f3b5cc8ba728dce1f1596e64e3bd7d0e8b299a836d400272222d0"} Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.104070 4482 scope.go:117] "RemoveContainer" containerID="0c1681082467d374faa607583e95edc04c3863b50f5b8d4d8fe6e96231e61603" Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.123365 4482 scope.go:117] "RemoveContainer" containerID="158141583c140affa4ad273cf384153866107387333cdbe836ac9fda21c86b7b" Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.130647 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rbrwb"] Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.144293 4482 scope.go:117] "RemoveContainer" containerID="37919b746029886f5f9f9b337fdf8d12b682f39da187e3849085917d1f59da45" Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.146708 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rbrwb"] Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.192344 4482 scope.go:117] "RemoveContainer" containerID="0c1681082467d374faa607583e95edc04c3863b50f5b8d4d8fe6e96231e61603" Nov 25 07:13:01 crc kubenswrapper[4482]: E1125 07:13:01.192754 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0c1681082467d374faa607583e95edc04c3863b50f5b8d4d8fe6e96231e61603\": container with ID starting with 0c1681082467d374faa607583e95edc04c3863b50f5b8d4d8fe6e96231e61603 not found: ID does not exist" containerID="0c1681082467d374faa607583e95edc04c3863b50f5b8d4d8fe6e96231e61603" Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.192801 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c1681082467d374faa607583e95edc04c3863b50f5b8d4d8fe6e96231e61603"} err="failed to get container status \"0c1681082467d374faa607583e95edc04c3863b50f5b8d4d8fe6e96231e61603\": rpc error: code = NotFound desc = could not find container \"0c1681082467d374faa607583e95edc04c3863b50f5b8d4d8fe6e96231e61603\": container with ID starting with 0c1681082467d374faa607583e95edc04c3863b50f5b8d4d8fe6e96231e61603 not found: ID does not exist" Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.192833 4482 scope.go:117] "RemoveContainer" containerID="158141583c140affa4ad273cf384153866107387333cdbe836ac9fda21c86b7b" Nov 25 07:13:01 crc kubenswrapper[4482]: E1125 07:13:01.193381 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"158141583c140affa4ad273cf384153866107387333cdbe836ac9fda21c86b7b\": container with ID starting with 158141583c140affa4ad273cf384153866107387333cdbe836ac9fda21c86b7b not found: ID does not exist" containerID="158141583c140affa4ad273cf384153866107387333cdbe836ac9fda21c86b7b" Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.193518 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158141583c140affa4ad273cf384153866107387333cdbe836ac9fda21c86b7b"} err="failed to get container status \"158141583c140affa4ad273cf384153866107387333cdbe836ac9fda21c86b7b\": rpc error: code = NotFound desc = could not find container \"158141583c140affa4ad273cf384153866107387333cdbe836ac9fda21c86b7b\": container with ID starting with 158141583c140affa4ad273cf384153866107387333cdbe836ac9fda21c86b7b not found: ID does not exist" Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.193617 4482 scope.go:117] "RemoveContainer" containerID="37919b746029886f5f9f9b337fdf8d12b682f39da187e3849085917d1f59da45" Nov 25 07:13:01 crc kubenswrapper[4482]: E1125 07:13:01.194126 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37919b746029886f5f9f9b337fdf8d12b682f39da187e3849085917d1f59da45\": container with ID starting with 37919b746029886f5f9f9b337fdf8d12b682f39da187e3849085917d1f59da45 not found: ID does not exist" containerID="37919b746029886f5f9f9b337fdf8d12b682f39da187e3849085917d1f59da45" Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.194160 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37919b746029886f5f9f9b337fdf8d12b682f39da187e3849085917d1f59da45"} err="failed to get container status \"37919b746029886f5f9f9b337fdf8d12b682f39da187e3849085917d1f59da45\": rpc error: code = NotFound desc = could not find container \"37919b746029886f5f9f9b337fdf8d12b682f39da187e3849085917d1f59da45\": container with ID starting with 37919b746029886f5f9f9b337fdf8d12b682f39da187e3849085917d1f59da45 not found: ID does not exist" Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.841411 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11533631-6479-4f8b-baaf-b1c71de4a966" 
path="/var/lib/kubelet/pods/11533631-6479-4f8b-baaf-b1c71de4a966/volumes" Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.842845 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47d66957-13fe-4c90-b512-d8e8e56e5e29" path="/var/lib/kubelet/pods/47d66957-13fe-4c90-b512-d8e8e56e5e29/volumes" Nov 25 07:13:01 crc kubenswrapper[4482]: I1125 07:13:01.843396 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c64be5bc-6821-4ed8-9155-dcedbfaec076" path="/var/lib/kubelet/pods/c64be5bc-6821-4ed8-9155-dcedbfaec076/volumes" Nov 25 07:13:06 crc kubenswrapper[4482]: I1125 07:13:06.022981 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-qm4lm"] Nov 25 07:13:06 crc kubenswrapper[4482]: I1125 07:13:06.031090 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-qm4lm"] Nov 25 07:13:06 crc kubenswrapper[4482]: I1125 07:13:06.831458 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:13:06 crc kubenswrapper[4482]: E1125 07:13:06.831902 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:13:07 crc kubenswrapper[4482]: I1125 07:13:07.841020 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f" path="/var/lib/kubelet/pods/1e19cf1f-9ab4-4ac3-a735-0ae4252ac46f/volumes" Nov 25 07:13:18 crc kubenswrapper[4482]: I1125 07:13:18.831147 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:13:18 crc kubenswrapper[4482]: E1125 07:13:18.831677 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:13:31 crc kubenswrapper[4482]: I1125 07:13:31.831289 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:13:31 crc kubenswrapper[4482]: E1125 07:13:31.832152 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:13:38 crc kubenswrapper[4482]: I1125 07:13:38.931941 4482 scope.go:117] "RemoveContainer" containerID="bf9624616701ab1c4e4f88c5ff72594fc0c04a3b485b12aab244e8d50c4d9407" Nov 25 07:13:38 crc kubenswrapper[4482]: I1125 07:13:38.958002 4482 scope.go:117] "RemoveContainer" containerID="44e744e7354094966911ff43826e1f22fc2d929d601bfc29686371979810cb41" Nov 25 07:13:38 crc kubenswrapper[4482]: 
I1125 07:13:38.996871 4482 scope.go:117] "RemoveContainer" containerID="fbf73235398a41b20075dd023a863d16de2c876a88c45c26e0f0249a327ebe45" Nov 25 07:13:39 crc kubenswrapper[4482]: I1125 07:13:39.027291 4482 scope.go:117] "RemoveContainer" containerID="22df4d6578c18583c058a7a90fcceb72256ebc798a36408a01ff1c222e2d44ae" Nov 25 07:13:39 crc kubenswrapper[4482]: I1125 07:13:39.054688 4482 scope.go:117] "RemoveContainer" containerID="7caca70f49e5acd7a27569de6c3729ad30f554a367594838e2cb7e93f9f3dc80" Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.058935 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-383a-account-create-6z5m7"] Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.079713 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e8cc-account-create-hf9xd"] Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.086549 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-83e4-account-create-r6m84"] Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.094672 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-383a-account-create-6z5m7"] Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.100861 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-fvmzq"] Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.106319 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-83e4-account-create-r6m84"] Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.111248 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-tgdj7"] Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.116569 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e8cc-account-create-hf9xd"] Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.121690 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jcvz8"] Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.126921 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-tgdj7"] Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.134853 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jcvz8"] Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.140633 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-fvmzq"] Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.830584 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:13:43 crc kubenswrapper[4482]: E1125 07:13:43.830811 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.839217 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3410b4-a318-4018-85b3-1447b61ae0e5" path="/var/lib/kubelet/pods/1d3410b4-a318-4018-85b3-1447b61ae0e5/volumes" Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.839803 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="754089e8-09b2-44ad-bdf7-ac4bb4871f3b" path="/var/lib/kubelet/pods/754089e8-09b2-44ad-bdf7-ac4bb4871f3b/volumes" Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.840740 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82aeaff-100d-45a9-9694-aae65838cf91" path="/var/lib/kubelet/pods/b82aeaff-100d-45a9-9694-aae65838cf91/volumes" Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.841567 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae725f0-8063-4795-bbee-c00ee44a38b8" path="/var/lib/kubelet/pods/cae725f0-8063-4795-bbee-c00ee44a38b8/volumes" Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.842555 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4820888-8372-4ac2-b8bd-f6d5f1f64770" path="/var/lib/kubelet/pods/d4820888-8372-4ac2-b8bd-f6d5f1f64770/volumes" Nov 25 07:13:43 crc kubenswrapper[4482]: I1125 07:13:43.843067 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e617f2ae-a16c-405e-b79a-5331a8884588" path="/var/lib/kubelet/pods/e617f2ae-a16c-405e-b79a-5331a8884588/volumes" Nov 25 07:13:58 crc kubenswrapper[4482]: I1125 07:13:58.830994 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:13:58 crc kubenswrapper[4482]: E1125 07:13:58.831648 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:13:59 crc kubenswrapper[4482]: I1125 07:13:59.028782 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-v2dqt"] Nov 25 07:13:59 crc kubenswrapper[4482]: I1125 07:13:59.035300 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-v2dqt"] Nov 25 07:13:59 crc kubenswrapper[4482]: I1125 07:13:59.840201 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e50321d-a59a-4d39-a485-4299ced13bdc" path="/var/lib/kubelet/pods/3e50321d-a59a-4d39-a485-4299ced13bdc/volumes" Nov 25 07:14:09 crc kubenswrapper[4482]: I1125 07:14:09.831149 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:14:09 crc kubenswrapper[4482]: E1125 07:14:09.831882 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:14:10 crc kubenswrapper[4482]: I1125 07:14:10.032265 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-z8dgz"] Nov 25 07:14:10 crc kubenswrapper[4482]: I1125 07:14:10.040132 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-z8dgz"] Nov 25 07:14:11 crc kubenswrapper[4482]: I1125 07:14:11.845453 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d25c491-a613-4f52-8cb8-95d689bc3000" 
path="/var/lib/kubelet/pods/6d25c491-a613-4f52-8cb8-95d689bc3000/volumes" Nov 25 07:14:22 crc kubenswrapper[4482]: I1125 07:14:22.831158 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:14:22 crc kubenswrapper[4482]: E1125 07:14:22.831816 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:14:36 crc kubenswrapper[4482]: I1125 07:14:36.831377 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:14:36 crc kubenswrapper[4482]: E1125 07:14:36.831994 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:14:39 crc kubenswrapper[4482]: I1125 07:14:39.187616 4482 scope.go:117] "RemoveContainer" containerID="ecfb935954d7f35df84fb2d75a72c26d682dbb01556014ff0bb4fc0c078e5b54" Nov 25 07:14:39 crc kubenswrapper[4482]: I1125 07:14:39.208122 4482 scope.go:117] "RemoveContainer" containerID="99022cbfd793bfa719f0c1456d3ae613406fde7cf1e69c24d5ca9bccaec27df7" Nov 25 07:14:39 crc kubenswrapper[4482]: I1125 07:14:39.249045 4482 scope.go:117] "RemoveContainer" containerID="04f6ff398a11bfd652274cebdd4ffdf94adc2c7d0c955e6fad0b0ad02da6d9f4" Nov 25 07:14:39 crc kubenswrapper[4482]: I1125 07:14:39.294630 4482 scope.go:117] "RemoveContainer" containerID="3084593537a769be11003e4b88b0d06a1b8d11262a8479edbeedc721630daba5" Nov 25 07:14:39 crc kubenswrapper[4482]: I1125 07:14:39.325379 4482 scope.go:117] "RemoveContainer" containerID="04077ac8ba14602fb15a0e04ff6d652d77f49c8803782427758af7b08b69a4a7" Nov 25 07:14:39 crc kubenswrapper[4482]: I1125 07:14:39.378665 4482 scope.go:117] "RemoveContainer" containerID="600c24fde43f2c0c7db39eb1dc22497621eed85903b4c1e91be358b9aa5ce530" Nov 25 07:14:39 crc kubenswrapper[4482]: I1125 07:14:39.395344 4482 scope.go:117] "RemoveContainer" containerID="f09432ab721ccc06512c1952caab538dd6b44d2f9db2c84dc8963627e0347838" Nov 25 07:14:39 crc kubenswrapper[4482]: I1125 07:14:39.412145 4482 scope.go:117] "RemoveContainer" containerID="3c0a229f14073f5031de207333fcb4f1c7c0a21bc2d23910985df8156c18fa4e" Nov 25 07:14:49 crc kubenswrapper[4482]: I1125 07:14:49.040449 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-ggvxs"] Nov 25 07:14:49 crc kubenswrapper[4482]: I1125 07:14:49.046980 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-ggvxs"] Nov 25 07:14:49 crc kubenswrapper[4482]: I1125 07:14:49.838791 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f1385f6-5258-4372-a20a-30a7229ec2e8" path="/var/lib/kubelet/pods/6f1385f6-5258-4372-a20a-30a7229ec2e8/volumes" Nov 25 07:14:50 crc kubenswrapper[4482]: I1125 07:14:50.021451 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-conductor-db-sync-cfr4t"] Nov 25 07:14:50 crc kubenswrapper[4482]: I1125 07:14:50.029466 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cfr4t"] Nov 25 07:14:51 crc kubenswrapper[4482]: I1125 07:14:51.830423 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:14:51 crc kubenswrapper[4482]: E1125 07:14:51.830744 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:14:51 crc kubenswrapper[4482]: I1125 07:14:51.840219 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda0ef98-7b63-4531-8655-a537323394a7" path="/var/lib/kubelet/pods/cda0ef98-7b63-4531-8655-a537323394a7/volumes" Nov 25 07:14:55 crc kubenswrapper[4482]: I1125 07:14:55.986888 4482 generic.go:334] "Generic (PLEG): container finished" podID="5369c6f0-a3ea-470c-bda2-abba45b2b4e6" containerID="eccdb26ac9df28e4acb94942b08bbabe95aa2be2698814184745510a0d01d17b" exitCode=0 Nov 25 07:14:55 crc kubenswrapper[4482]: I1125 07:14:55.986963 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" event={"ID":"5369c6f0-a3ea-470c-bda2-abba45b2b4e6","Type":"ContainerDied","Data":"eccdb26ac9df28e4acb94942b08bbabe95aa2be2698814184745510a0d01d17b"} Nov 25 07:14:57 crc kubenswrapper[4482]: I1125 07:14:57.429267 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" Nov 25 07:14:57 crc kubenswrapper[4482]: I1125 07:14:57.509696 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-ssh-key\") pod \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\" (UID: \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\") " Nov 25 07:14:57 crc kubenswrapper[4482]: I1125 07:14:57.509753 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-inventory\") pod \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\" (UID: \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\") " Nov 25 07:14:57 crc kubenswrapper[4482]: I1125 07:14:57.509837 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98gmc\" (UniqueName: \"kubernetes.io/projected/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-kube-api-access-98gmc\") pod \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\" (UID: \"5369c6f0-a3ea-470c-bda2-abba45b2b4e6\") " Nov 25 07:14:57 crc kubenswrapper[4482]: I1125 07:14:57.515433 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-kube-api-access-98gmc" (OuterVolumeSpecName: "kube-api-access-98gmc") pod "5369c6f0-a3ea-470c-bda2-abba45b2b4e6" (UID: "5369c6f0-a3ea-470c-bda2-abba45b2b4e6"). InnerVolumeSpecName "kube-api-access-98gmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:14:57 crc kubenswrapper[4482]: I1125 07:14:57.531575 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5369c6f0-a3ea-470c-bda2-abba45b2b4e6" (UID: "5369c6f0-a3ea-470c-bda2-abba45b2b4e6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:14:57 crc kubenswrapper[4482]: I1125 07:14:57.532361 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-inventory" (OuterVolumeSpecName: "inventory") pod "5369c6f0-a3ea-470c-bda2-abba45b2b4e6" (UID: "5369c6f0-a3ea-470c-bda2-abba45b2b4e6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:14:57 crc kubenswrapper[4482]: I1125 07:14:57.611873 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98gmc\" (UniqueName: \"kubernetes.io/projected/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-kube-api-access-98gmc\") on node \"crc\" DevicePath \"\"" Nov 25 07:14:57 crc kubenswrapper[4482]: I1125 07:14:57.611900 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:14:57 crc kubenswrapper[4482]: I1125 07:14:57.611910 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5369c6f0-a3ea-470c-bda2-abba45b2b4e6-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.003962 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" event={"ID":"5369c6f0-a3ea-470c-bda2-abba45b2b4e6","Type":"ContainerDied","Data":"27b7e7fe4449ab2e2e2373764cebed43f78dc3e51010076e3851024a030b6f0e"} Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.003997 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27b7e7fe4449ab2e2e2373764cebed43f78dc3e51010076e3851024a030b6f0e" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.004134 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjqcq" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.078857 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr"] Nov 25 07:14:58 crc kubenswrapper[4482]: E1125 07:14:58.079253 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d66957-13fe-4c90-b512-d8e8e56e5e29" containerName="extract-content" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079271 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d66957-13fe-4c90-b512-d8e8e56e5e29" containerName="extract-content" Nov 25 07:14:58 crc kubenswrapper[4482]: E1125 07:14:58.079308 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" containerName="extract-content" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079314 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" containerName="extract-content" Nov 25 07:14:58 crc kubenswrapper[4482]: E1125 07:14:58.079327 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64be5bc-6821-4ed8-9155-dcedbfaec076" containerName="extract-utilities" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079333 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64be5bc-6821-4ed8-9155-dcedbfaec076" containerName="extract-utilities" Nov 25 07:14:58 crc kubenswrapper[4482]: E1125 07:14:58.079342 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5369c6f0-a3ea-470c-bda2-abba45b2b4e6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079348 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="5369c6f0-a3ea-470c-bda2-abba45b2b4e6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 07:14:58 crc kubenswrapper[4482]: E1125 07:14:58.079361 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d66957-13fe-4c90-b512-d8e8e56e5e29" containerName="registry-server" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079367 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d66957-13fe-4c90-b512-d8e8e56e5e29" containerName="registry-server" Nov 25 07:14:58 crc kubenswrapper[4482]: E1125 07:14:58.079375 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" containerName="extract-utilities" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079380 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" containerName="extract-utilities" Nov 25 07:14:58 crc kubenswrapper[4482]: E1125 07:14:58.079390 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64be5bc-6821-4ed8-9155-dcedbfaec076" containerName="registry-server" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079395 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64be5bc-6821-4ed8-9155-dcedbfaec076" containerName="registry-server" Nov 25 07:14:58 crc kubenswrapper[4482]: E1125 07:14:58.079400 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d66957-13fe-4c90-b512-d8e8e56e5e29" containerName="extract-utilities" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079406 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d66957-13fe-4c90-b512-d8e8e56e5e29" containerName="extract-utilities" Nov 25 
07:14:58 crc kubenswrapper[4482]: E1125 07:14:58.079416 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64be5bc-6821-4ed8-9155-dcedbfaec076" containerName="extract-content" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079421 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64be5bc-6821-4ed8-9155-dcedbfaec076" containerName="extract-content" Nov 25 07:14:58 crc kubenswrapper[4482]: E1125 07:14:58.079432 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" containerName="registry-server" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079439 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" containerName="registry-server" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079597 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e53e303-f7ab-4b14-bc07-bbc46fa7bdd7" containerName="registry-server" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079609 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="c64be5bc-6821-4ed8-9155-dcedbfaec076" containerName="registry-server" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079618 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d66957-13fe-4c90-b512-d8e8e56e5e29" containerName="registry-server" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.079632 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="5369c6f0-a3ea-470c-bda2-abba45b2b4e6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.080211 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.082136 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.082454 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.082596 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.091840 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.095904 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr"] Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.222323 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/594404b7-8205-4bf9-b8d3-1547108e437e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr\" (UID: \"594404b7-8205-4bf9-b8d3-1547108e437e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.222524 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/594404b7-8205-4bf9-b8d3-1547108e437e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr\" (UID: 
\"594404b7-8205-4bf9-b8d3-1547108e437e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.222711 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzlx5\" (UniqueName: \"kubernetes.io/projected/594404b7-8205-4bf9-b8d3-1547108e437e-kube-api-access-xzlx5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr\" (UID: \"594404b7-8205-4bf9-b8d3-1547108e437e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.324037 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzlx5\" (UniqueName: \"kubernetes.io/projected/594404b7-8205-4bf9-b8d3-1547108e437e-kube-api-access-xzlx5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr\" (UID: \"594404b7-8205-4bf9-b8d3-1547108e437e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.324107 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/594404b7-8205-4bf9-b8d3-1547108e437e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr\" (UID: \"594404b7-8205-4bf9-b8d3-1547108e437e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.324157 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/594404b7-8205-4bf9-b8d3-1547108e437e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr\" (UID: \"594404b7-8205-4bf9-b8d3-1547108e437e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.329455 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/594404b7-8205-4bf9-b8d3-1547108e437e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr\" (UID: \"594404b7-8205-4bf9-b8d3-1547108e437e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.329643 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/594404b7-8205-4bf9-b8d3-1547108e437e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr\" (UID: \"594404b7-8205-4bf9-b8d3-1547108e437e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.348600 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzlx5\" (UniqueName: \"kubernetes.io/projected/594404b7-8205-4bf9-b8d3-1547108e437e-kube-api-access-xzlx5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr\" (UID: \"594404b7-8205-4bf9-b8d3-1547108e437e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.393086 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.820072 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr"] Nov 25 07:14:58 crc kubenswrapper[4482]: I1125 07:14:58.826618 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 07:14:59 crc kubenswrapper[4482]: I1125 07:14:59.011502 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" event={"ID":"594404b7-8205-4bf9-b8d3-1547108e437e","Type":"ContainerStarted","Data":"101ed24d2ecd58d30316b7f59fe4c994e0ad91b3444e8e7d2e29fd333f750081"} Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.021406 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" event={"ID":"594404b7-8205-4bf9-b8d3-1547108e437e","Type":"ContainerStarted","Data":"fcf8bbad8ef2a6280e1451eaaf5bfdf1c722d969546a66e57e835845a7d55241"} Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.037284 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" podStartSLOduration=1.4584134739999999 podStartE2EDuration="2.037270027s" podCreationTimestamp="2025-11-25 07:14:58 +0000 UTC" firstStartedPulling="2025-11-25 07:14:58.825583605 +0000 UTC m=+1673.313814864" lastFinishedPulling="2025-11-25 07:14:59.404440158 +0000 UTC m=+1673.892671417" observedRunningTime="2025-11-25 07:15:00.032217802 +0000 UTC m=+1674.520449061" watchObservedRunningTime="2025-11-25 07:15:00.037270027 +0000 UTC m=+1674.525501287" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.142738 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26"] Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.144524 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.146306 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.146522 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.160384 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26"] Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.254707 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c30f6098-f136-489a-a90a-e8e76cae8fcb-secret-volume\") pod \"collect-profiles-29400915-htw26\" (UID: \"c30f6098-f136-489a-a90a-e8e76cae8fcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.254791 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpn6q\" (UniqueName: \"kubernetes.io/projected/c30f6098-f136-489a-a90a-e8e76cae8fcb-kube-api-access-kpn6q\") pod \"collect-profiles-29400915-htw26\" (UID: \"c30f6098-f136-489a-a90a-e8e76cae8fcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.254825 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c30f6098-f136-489a-a90a-e8e76cae8fcb-config-volume\") pod \"collect-profiles-29400915-htw26\" (UID: \"c30f6098-f136-489a-a90a-e8e76cae8fcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.356476 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpn6q\" (UniqueName: \"kubernetes.io/projected/c30f6098-f136-489a-a90a-e8e76cae8fcb-kube-api-access-kpn6q\") pod \"collect-profiles-29400915-htw26\" (UID: \"c30f6098-f136-489a-a90a-e8e76cae8fcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.356525 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c30f6098-f136-489a-a90a-e8e76cae8fcb-config-volume\") pod \"collect-profiles-29400915-htw26\" (UID: \"c30f6098-f136-489a-a90a-e8e76cae8fcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.356616 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c30f6098-f136-489a-a90a-e8e76cae8fcb-secret-volume\") pod \"collect-profiles-29400915-htw26\" (UID: \"c30f6098-f136-489a-a90a-e8e76cae8fcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.357379 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c30f6098-f136-489a-a90a-e8e76cae8fcb-config-volume\") pod 
\"collect-profiles-29400915-htw26\" (UID: \"c30f6098-f136-489a-a90a-e8e76cae8fcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.360876 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c30f6098-f136-489a-a90a-e8e76cae8fcb-secret-volume\") pod \"collect-profiles-29400915-htw26\" (UID: \"c30f6098-f136-489a-a90a-e8e76cae8fcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.371692 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpn6q\" (UniqueName: \"kubernetes.io/projected/c30f6098-f136-489a-a90a-e8e76cae8fcb-kube-api-access-kpn6q\") pod \"collect-profiles-29400915-htw26\" (UID: \"c30f6098-f136-489a-a90a-e8e76cae8fcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.461757 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" Nov 25 07:15:00 crc kubenswrapper[4482]: I1125 07:15:00.836878 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26"] Nov 25 07:15:01 crc kubenswrapper[4482]: I1125 07:15:01.031727 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" event={"ID":"c30f6098-f136-489a-a90a-e8e76cae8fcb","Type":"ContainerStarted","Data":"ea67dddf7ba55afe9d550875bb8082d3c2b87c5b81287372d159ff050ab49763"} Nov 25 07:15:01 crc kubenswrapper[4482]: I1125 07:15:01.031931 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" event={"ID":"c30f6098-f136-489a-a90a-e8e76cae8fcb","Type":"ContainerStarted","Data":"1ea3e9c78f149b221c528d6abab34d1252c37996e743b048c4666768a79ab837"} Nov 25 07:15:01 crc kubenswrapper[4482]: I1125 07:15:01.051748 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" podStartSLOduration=1.05173741 podStartE2EDuration="1.05173741s" podCreationTimestamp="2025-11-25 07:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:15:01.044717235 +0000 UTC m=+1675.532948494" watchObservedRunningTime="2025-11-25 07:15:01.05173741 +0000 UTC m=+1675.539968659" Nov 25 07:15:02 crc kubenswrapper[4482]: I1125 07:15:02.039412 4482 generic.go:334] "Generic (PLEG): container finished" podID="c30f6098-f136-489a-a90a-e8e76cae8fcb" containerID="ea67dddf7ba55afe9d550875bb8082d3c2b87c5b81287372d159ff050ab49763" exitCode=0 Nov 25 07:15:02 crc kubenswrapper[4482]: I1125 07:15:02.039453 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" event={"ID":"c30f6098-f136-489a-a90a-e8e76cae8fcb","Type":"ContainerDied","Data":"ea67dddf7ba55afe9d550875bb8082d3c2b87c5b81287372d159ff050ab49763"} Nov 25 07:15:03 crc kubenswrapper[4482]: I1125 07:15:03.280977 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" Nov 25 07:15:03 crc kubenswrapper[4482]: I1125 07:15:03.413456 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c30f6098-f136-489a-a90a-e8e76cae8fcb-secret-volume\") pod \"c30f6098-f136-489a-a90a-e8e76cae8fcb\" (UID: \"c30f6098-f136-489a-a90a-e8e76cae8fcb\") " Nov 25 07:15:03 crc kubenswrapper[4482]: I1125 07:15:03.413621 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpn6q\" (UniqueName: \"kubernetes.io/projected/c30f6098-f136-489a-a90a-e8e76cae8fcb-kube-api-access-kpn6q\") pod \"c30f6098-f136-489a-a90a-e8e76cae8fcb\" (UID: \"c30f6098-f136-489a-a90a-e8e76cae8fcb\") " Nov 25 07:15:03 crc kubenswrapper[4482]: I1125 07:15:03.413835 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c30f6098-f136-489a-a90a-e8e76cae8fcb-config-volume\") pod \"c30f6098-f136-489a-a90a-e8e76cae8fcb\" (UID: \"c30f6098-f136-489a-a90a-e8e76cae8fcb\") " Nov 25 07:15:03 crc kubenswrapper[4482]: I1125 07:15:03.414361 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c30f6098-f136-489a-a90a-e8e76cae8fcb-config-volume" (OuterVolumeSpecName: "config-volume") pod "c30f6098-f136-489a-a90a-e8e76cae8fcb" (UID: "c30f6098-f136-489a-a90a-e8e76cae8fcb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:03 crc kubenswrapper[4482]: I1125 07:15:03.418274 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c30f6098-f136-489a-a90a-e8e76cae8fcb-kube-api-access-kpn6q" (OuterVolumeSpecName: "kube-api-access-kpn6q") pod "c30f6098-f136-489a-a90a-e8e76cae8fcb" (UID: "c30f6098-f136-489a-a90a-e8e76cae8fcb"). InnerVolumeSpecName "kube-api-access-kpn6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:03 crc kubenswrapper[4482]: I1125 07:15:03.418415 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c30f6098-f136-489a-a90a-e8e76cae8fcb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c30f6098-f136-489a-a90a-e8e76cae8fcb" (UID: "c30f6098-f136-489a-a90a-e8e76cae8fcb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:03 crc kubenswrapper[4482]: I1125 07:15:03.515863 4482 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c30f6098-f136-489a-a90a-e8e76cae8fcb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:03 crc kubenswrapper[4482]: I1125 07:15:03.515892 4482 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c30f6098-f136-489a-a90a-e8e76cae8fcb-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:03 crc kubenswrapper[4482]: I1125 07:15:03.515902 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpn6q\" (UniqueName: \"kubernetes.io/projected/c30f6098-f136-489a-a90a-e8e76cae8fcb-kube-api-access-kpn6q\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:04 crc kubenswrapper[4482]: I1125 07:15:04.053778 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" event={"ID":"c30f6098-f136-489a-a90a-e8e76cae8fcb","Type":"ContainerDied","Data":"1ea3e9c78f149b221c528d6abab34d1252c37996e743b048c4666768a79ab837"} Nov 25 07:15:04 crc kubenswrapper[4482]: I1125 07:15:04.053821 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ea3e9c78f149b221c528d6abab34d1252c37996e743b048c4666768a79ab837" Nov 25 07:15:04 crc kubenswrapper[4482]: I1125 07:15:04.053819 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26" Nov 25 07:15:04 crc kubenswrapper[4482]: I1125 07:15:04.830883 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:15:04 crc kubenswrapper[4482]: E1125 07:15:04.831314 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:15:16 crc kubenswrapper[4482]: I1125 07:15:16.030903 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwzh2"] Nov 25 07:15:16 crc kubenswrapper[4482]: I1125 07:15:16.038399 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwzh2"] Nov 25 07:15:17 crc kubenswrapper[4482]: I1125 07:15:17.840456 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d" path="/var/lib/kubelet/pods/cf5ca00e-c6fc-4e5e-abaf-2958a4e3239d/volumes" Nov 25 07:15:18 crc kubenswrapper[4482]: I1125 07:15:18.830841 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:15:18 crc kubenswrapper[4482]: E1125 07:15:18.831079 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" 
podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:15:21 crc kubenswrapper[4482]: I1125 07:15:21.022051 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-vddpr"] Nov 25 07:15:21 crc kubenswrapper[4482]: I1125 07:15:21.032662 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-vddpr"] Nov 25 07:15:21 crc kubenswrapper[4482]: I1125 07:15:21.838765 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1909a799-3429-4fe2-adca-d756ae0c7c59" path="/var/lib/kubelet/pods/1909a799-3429-4fe2-adca-d756ae0c7c59/volumes" Nov 25 07:15:29 crc kubenswrapper[4482]: I1125 07:15:29.830491 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:15:29 crc kubenswrapper[4482]: E1125 07:15:29.831793 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:15:39 crc kubenswrapper[4482]: I1125 07:15:39.553595 4482 scope.go:117] "RemoveContainer" containerID="dc337e694aff42f5f1e50941d1fc9763e0bb538c31efd27659ef20f62153f7e9" Nov 25 07:15:39 crc kubenswrapper[4482]: I1125 07:15:39.575090 4482 scope.go:117] "RemoveContainer" containerID="01c7d5ff0000392ead9d789749415cd0ef192c17b400db44e5603e6a3540cb56" Nov 25 07:15:39 crc kubenswrapper[4482]: I1125 07:15:39.607304 4482 scope.go:117] "RemoveContainer" containerID="9dfc79e9ca51e0b4abf83b05a54ac2273275d7193b81548cec98fdbf415d0864" Nov 25 07:15:39 crc kubenswrapper[4482]: I1125 07:15:39.635027 4482 scope.go:117] "RemoveContainer" containerID="92238d549d3d1f1b3a9886cd6ca519323cb68b4f3696e31974afa748a8ab2ab7" Nov 25 07:15:40 crc kubenswrapper[4482]: I1125 07:15:40.830819 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:15:40 crc kubenswrapper[4482]: E1125 07:15:40.831282 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:15:55 crc kubenswrapper[4482]: I1125 07:15:55.835400 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:15:55 crc kubenswrapper[4482]: E1125 07:15:55.835997 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:15:57 crc kubenswrapper[4482]: I1125 07:15:57.416623 4482 generic.go:334] "Generic (PLEG): container finished" podID="594404b7-8205-4bf9-b8d3-1547108e437e" 
containerID="fcf8bbad8ef2a6280e1451eaaf5bfdf1c722d969546a66e57e835845a7d55241" exitCode=0 Nov 25 07:15:57 crc kubenswrapper[4482]: I1125 07:15:57.416692 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" event={"ID":"594404b7-8205-4bf9-b8d3-1547108e437e","Type":"ContainerDied","Data":"fcf8bbad8ef2a6280e1451eaaf5bfdf1c722d969546a66e57e835845a7d55241"} Nov 25 07:15:58 crc kubenswrapper[4482]: I1125 07:15:58.686209 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" Nov 25 07:15:58 crc kubenswrapper[4482]: I1125 07:15:58.885814 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzlx5\" (UniqueName: \"kubernetes.io/projected/594404b7-8205-4bf9-b8d3-1547108e437e-kube-api-access-xzlx5\") pod \"594404b7-8205-4bf9-b8d3-1547108e437e\" (UID: \"594404b7-8205-4bf9-b8d3-1547108e437e\") " Nov 25 07:15:58 crc kubenswrapper[4482]: I1125 07:15:58.885930 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/594404b7-8205-4bf9-b8d3-1547108e437e-inventory\") pod \"594404b7-8205-4bf9-b8d3-1547108e437e\" (UID: \"594404b7-8205-4bf9-b8d3-1547108e437e\") " Nov 25 07:15:58 crc kubenswrapper[4482]: I1125 07:15:58.886077 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/594404b7-8205-4bf9-b8d3-1547108e437e-ssh-key\") pod \"594404b7-8205-4bf9-b8d3-1547108e437e\" (UID: \"594404b7-8205-4bf9-b8d3-1547108e437e\") " Nov 25 07:15:58 crc kubenswrapper[4482]: I1125 07:15:58.890186 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594404b7-8205-4bf9-b8d3-1547108e437e-kube-api-access-xzlx5" (OuterVolumeSpecName: "kube-api-access-xzlx5") pod "594404b7-8205-4bf9-b8d3-1547108e437e" (UID: "594404b7-8205-4bf9-b8d3-1547108e437e"). InnerVolumeSpecName "kube-api-access-xzlx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:58 crc kubenswrapper[4482]: I1125 07:15:58.907961 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/594404b7-8205-4bf9-b8d3-1547108e437e-inventory" (OuterVolumeSpecName: "inventory") pod "594404b7-8205-4bf9-b8d3-1547108e437e" (UID: "594404b7-8205-4bf9-b8d3-1547108e437e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:58 crc kubenswrapper[4482]: I1125 07:15:58.910925 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/594404b7-8205-4bf9-b8d3-1547108e437e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "594404b7-8205-4bf9-b8d3-1547108e437e" (UID: "594404b7-8205-4bf9-b8d3-1547108e437e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:58 crc kubenswrapper[4482]: I1125 07:15:58.988340 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/594404b7-8205-4bf9-b8d3-1547108e437e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:58 crc kubenswrapper[4482]: I1125 07:15:58.988445 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzlx5\" (UniqueName: \"kubernetes.io/projected/594404b7-8205-4bf9-b8d3-1547108e437e-kube-api-access-xzlx5\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:58 crc kubenswrapper[4482]: I1125 07:15:58.988505 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/594404b7-8205-4bf9-b8d3-1547108e437e-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.431806 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" event={"ID":"594404b7-8205-4bf9-b8d3-1547108e437e","Type":"ContainerDied","Data":"101ed24d2ecd58d30316b7f59fe4c994e0ad91b3444e8e7d2e29fd333f750081"} Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.431840 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="101ed24d2ecd58d30316b7f59fe4c994e0ad91b3444e8e7d2e29fd333f750081" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.431846 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ppwsr" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.492433 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf"] Nov 25 07:15:59 crc kubenswrapper[4482]: E1125 07:15:59.492827 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594404b7-8205-4bf9-b8d3-1547108e437e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.492845 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="594404b7-8205-4bf9-b8d3-1547108e437e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:15:59 crc kubenswrapper[4482]: E1125 07:15:59.492878 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c30f6098-f136-489a-a90a-e8e76cae8fcb" containerName="collect-profiles" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.492884 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="c30f6098-f136-489a-a90a-e8e76cae8fcb" containerName="collect-profiles" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.493065 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="594404b7-8205-4bf9-b8d3-1547108e437e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.493085 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="c30f6098-f136-489a-a90a-e8e76cae8fcb" containerName="collect-profiles" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.493777 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.503385 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.504765 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.505031 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.505125 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.505150 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b59b4a45-6172-4fe8-9255-bbe646dbc92e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf\" (UID: \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.505276 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7hsp\" (UniqueName: \"kubernetes.io/projected/b59b4a45-6172-4fe8-9255-bbe646dbc92e-kube-api-access-h7hsp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf\" (UID: \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.505373 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b59b4a45-6172-4fe8-9255-bbe646dbc92e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf\" (UID: \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.515062 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf"] Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.607070 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b59b4a45-6172-4fe8-9255-bbe646dbc92e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf\" (UID: \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.607485 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b59b4a45-6172-4fe8-9255-bbe646dbc92e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf\" (UID: \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.607527 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7hsp\" (UniqueName: \"kubernetes.io/projected/b59b4a45-6172-4fe8-9255-bbe646dbc92e-kube-api-access-h7hsp\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf\" (UID: \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.611514 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b59b4a45-6172-4fe8-9255-bbe646dbc92e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf\" (UID: \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.611951 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b59b4a45-6172-4fe8-9255-bbe646dbc92e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf\" (UID: \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.621379 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7hsp\" (UniqueName: \"kubernetes.io/projected/b59b4a45-6172-4fe8-9255-bbe646dbc92e-kube-api-access-h7hsp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf\" (UID: \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" Nov 25 07:15:59 crc kubenswrapper[4482]: I1125 07:15:59.808790 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" Nov 25 07:16:00 crc kubenswrapper[4482]: I1125 07:16:00.237463 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf"] Nov 25 07:16:00 crc kubenswrapper[4482]: I1125 07:16:00.439224 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" event={"ID":"b59b4a45-6172-4fe8-9255-bbe646dbc92e","Type":"ContainerStarted","Data":"414467aaed07fb9eb81b42cdaa7b9582abc46580fcd11ed698282bfc05346e60"} Nov 25 07:16:01 crc kubenswrapper[4482]: I1125 07:16:01.448029 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" event={"ID":"b59b4a45-6172-4fe8-9255-bbe646dbc92e","Type":"ContainerStarted","Data":"52a2dd9e6f50293801a3a6c56be153fb0f9e7dc0684bca90750604c7516a403c"} Nov 25 07:16:01 crc kubenswrapper[4482]: I1125 07:16:01.467402 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" podStartSLOduration=1.9071920580000001 podStartE2EDuration="2.467388049s" podCreationTimestamp="2025-11-25 07:15:59 +0000 UTC" firstStartedPulling="2025-11-25 07:16:00.247594241 +0000 UTC m=+1734.735825501" lastFinishedPulling="2025-11-25 07:16:00.807790234 +0000 UTC m=+1735.296021492" observedRunningTime="2025-11-25 07:16:01.459462375 +0000 UTC m=+1735.947693635" watchObservedRunningTime="2025-11-25 07:16:01.467388049 +0000 UTC m=+1735.955619308" Nov 25 07:16:02 crc kubenswrapper[4482]: I1125 07:16:02.034960 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-n9bvq"] Nov 25 07:16:02 crc kubenswrapper[4482]: I1125 07:16:02.041371 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-n9bvq"] Nov 25 07:16:03 crc kubenswrapper[4482]: I1125 07:16:03.839646 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9dd329e-7514-4dbf-9e8f-e34467fa66ab" path="/var/lib/kubelet/pods/f9dd329e-7514-4dbf-9e8f-e34467fa66ab/volumes" Nov 25 07:16:04 crc kubenswrapper[4482]: I1125 07:16:04.470646 4482 generic.go:334] "Generic (PLEG): container finished" podID="b59b4a45-6172-4fe8-9255-bbe646dbc92e" containerID="52a2dd9e6f50293801a3a6c56be153fb0f9e7dc0684bca90750604c7516a403c" exitCode=0 Nov 25 07:16:04 crc kubenswrapper[4482]: I1125 07:16:04.470686 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" event={"ID":"b59b4a45-6172-4fe8-9255-bbe646dbc92e","Type":"ContainerDied","Data":"52a2dd9e6f50293801a3a6c56be153fb0f9e7dc0684bca90750604c7516a403c"} Nov 25 07:16:05 crc kubenswrapper[4482]: I1125 07:16:05.784749 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" Nov 25 07:16:05 crc kubenswrapper[4482]: I1125 07:16:05.901551 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b59b4a45-6172-4fe8-9255-bbe646dbc92e-inventory\") pod \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\" (UID: \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\") " Nov 25 07:16:05 crc kubenswrapper[4482]: I1125 07:16:05.901798 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b59b4a45-6172-4fe8-9255-bbe646dbc92e-ssh-key\") pod \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\" (UID: \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\") " Nov 25 07:16:05 crc kubenswrapper[4482]: I1125 07:16:05.901927 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7hsp\" (UniqueName: \"kubernetes.io/projected/b59b4a45-6172-4fe8-9255-bbe646dbc92e-kube-api-access-h7hsp\") pod \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\" (UID: \"b59b4a45-6172-4fe8-9255-bbe646dbc92e\") " Nov 25 07:16:05 crc kubenswrapper[4482]: I1125 07:16:05.905890 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59b4a45-6172-4fe8-9255-bbe646dbc92e-kube-api-access-h7hsp" (OuterVolumeSpecName: "kube-api-access-h7hsp") pod "b59b4a45-6172-4fe8-9255-bbe646dbc92e" (UID: "b59b4a45-6172-4fe8-9255-bbe646dbc92e"). InnerVolumeSpecName "kube-api-access-h7hsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:16:05 crc kubenswrapper[4482]: I1125 07:16:05.923595 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b59b4a45-6172-4fe8-9255-bbe646dbc92e-inventory" (OuterVolumeSpecName: "inventory") pod "b59b4a45-6172-4fe8-9255-bbe646dbc92e" (UID: "b59b4a45-6172-4fe8-9255-bbe646dbc92e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:16:05 crc kubenswrapper[4482]: I1125 07:16:05.925650 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b59b4a45-6172-4fe8-9255-bbe646dbc92e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b59b4a45-6172-4fe8-9255-bbe646dbc92e" (UID: "b59b4a45-6172-4fe8-9255-bbe646dbc92e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.003969 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b59b4a45-6172-4fe8-9255-bbe646dbc92e-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.003994 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b59b4a45-6172-4fe8-9255-bbe646dbc92e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.004004 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7hsp\" (UniqueName: \"kubernetes.io/projected/b59b4a45-6172-4fe8-9255-bbe646dbc92e-kube-api-access-h7hsp\") on node \"crc\" DevicePath \"\"" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.487443 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" event={"ID":"b59b4a45-6172-4fe8-9255-bbe646dbc92e","Type":"ContainerDied","Data":"414467aaed07fb9eb81b42cdaa7b9582abc46580fcd11ed698282bfc05346e60"} Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.487483 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="414467aaed07fb9eb81b42cdaa7b9582abc46580fcd11ed698282bfc05346e60" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.487486 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vdhqf" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.544912 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47"] Nov 25 07:16:06 crc kubenswrapper[4482]: E1125 07:16:06.545396 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59b4a45-6172-4fe8-9255-bbe646dbc92e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.545414 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59b4a45-6172-4fe8-9255-bbe646dbc92e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.545617 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="b59b4a45-6172-4fe8-9255-bbe646dbc92e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.546283 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.547682 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.549706 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.550001 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.550429 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.566821 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47"] Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.715084 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/299c9424-eaef-4472-855a-028b197973dd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rlk47\" (UID: \"299c9424-eaef-4472-855a-028b197973dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.715583 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh6bn\" (UniqueName: \"kubernetes.io/projected/299c9424-eaef-4472-855a-028b197973dd-kube-api-access-hh6bn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rlk47\" (UID: \"299c9424-eaef-4472-855a-028b197973dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.715679 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/299c9424-eaef-4472-855a-028b197973dd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rlk47\" (UID: \"299c9424-eaef-4472-855a-028b197973dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.817230 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/299c9424-eaef-4472-855a-028b197973dd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rlk47\" (UID: \"299c9424-eaef-4472-855a-028b197973dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.817308 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/299c9424-eaef-4472-855a-028b197973dd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rlk47\" (UID: \"299c9424-eaef-4472-855a-028b197973dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.817365 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh6bn\" (UniqueName: \"kubernetes.io/projected/299c9424-eaef-4472-855a-028b197973dd-kube-api-access-hh6bn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rlk47\" (UID: 
\"299c9424-eaef-4472-855a-028b197973dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.821891 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/299c9424-eaef-4472-855a-028b197973dd-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rlk47\" (UID: \"299c9424-eaef-4472-855a-028b197973dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.822308 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/299c9424-eaef-4472-855a-028b197973dd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rlk47\" (UID: \"299c9424-eaef-4472-855a-028b197973dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.830104 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh6bn\" (UniqueName: \"kubernetes.io/projected/299c9424-eaef-4472-855a-028b197973dd-kube-api-access-hh6bn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rlk47\" (UID: \"299c9424-eaef-4472-855a-028b197973dd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" Nov 25 07:16:06 crc kubenswrapper[4482]: I1125 07:16:06.857474 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" Nov 25 07:16:07 crc kubenswrapper[4482]: I1125 07:16:07.274023 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47"] Nov 25 07:16:07 crc kubenswrapper[4482]: I1125 07:16:07.495800 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" event={"ID":"299c9424-eaef-4472-855a-028b197973dd","Type":"ContainerStarted","Data":"f7e1cadce8447ccd4db27b488201ef4c528b3f88e2bbed53ffd8c612f9723fd3"} Nov 25 07:16:07 crc kubenswrapper[4482]: I1125 07:16:07.831306 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:16:07 crc kubenswrapper[4482]: E1125 07:16:07.831745 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:16:08 crc kubenswrapper[4482]: I1125 07:16:08.504270 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" event={"ID":"299c9424-eaef-4472-855a-028b197973dd","Type":"ContainerStarted","Data":"693530445f5e09f8d8520fb2c0a8276d22a4b76d7fee09236370820db7620bc9"} Nov 25 07:16:08 crc kubenswrapper[4482]: I1125 07:16:08.522381 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" podStartSLOduration=1.958210408 podStartE2EDuration="2.522369121s" podCreationTimestamp="2025-11-25 07:16:06 +0000 UTC" firstStartedPulling="2025-11-25 07:16:07.275380072 +0000 UTC m=+1741.763611331" 
lastFinishedPulling="2025-11-25 07:16:07.839538785 +0000 UTC m=+1742.327770044" observedRunningTime="2025-11-25 07:16:08.519092392 +0000 UTC m=+1743.007323652" watchObservedRunningTime="2025-11-25 07:16:08.522369121 +0000 UTC m=+1743.010600380" Nov 25 07:16:20 crc kubenswrapper[4482]: I1125 07:16:20.831157 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:16:20 crc kubenswrapper[4482]: E1125 07:16:20.831774 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:16:35 crc kubenswrapper[4482]: I1125 07:16:35.714870 4482 generic.go:334] "Generic (PLEG): container finished" podID="299c9424-eaef-4472-855a-028b197973dd" containerID="693530445f5e09f8d8520fb2c0a8276d22a4b76d7fee09236370820db7620bc9" exitCode=0 Nov 25 07:16:35 crc kubenswrapper[4482]: I1125 07:16:35.714950 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" event={"ID":"299c9424-eaef-4472-855a-028b197973dd","Type":"ContainerDied","Data":"693530445f5e09f8d8520fb2c0a8276d22a4b76d7fee09236370820db7620bc9"} Nov 25 07:16:35 crc kubenswrapper[4482]: I1125 07:16:35.837310 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:16:35 crc kubenswrapper[4482]: E1125 07:16:35.837531 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.020888 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.197414 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/299c9424-eaef-4472-855a-028b197973dd-ssh-key\") pod \"299c9424-eaef-4472-855a-028b197973dd\" (UID: \"299c9424-eaef-4472-855a-028b197973dd\") " Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.197559 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh6bn\" (UniqueName: \"kubernetes.io/projected/299c9424-eaef-4472-855a-028b197973dd-kube-api-access-hh6bn\") pod \"299c9424-eaef-4472-855a-028b197973dd\" (UID: \"299c9424-eaef-4472-855a-028b197973dd\") " Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.197762 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/299c9424-eaef-4472-855a-028b197973dd-inventory\") pod \"299c9424-eaef-4472-855a-028b197973dd\" (UID: \"299c9424-eaef-4472-855a-028b197973dd\") " Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.202224 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/299c9424-eaef-4472-855a-028b197973dd-kube-api-access-hh6bn" (OuterVolumeSpecName: "kube-api-access-hh6bn") pod "299c9424-eaef-4472-855a-028b197973dd" (UID: "299c9424-eaef-4472-855a-028b197973dd"). InnerVolumeSpecName "kube-api-access-hh6bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.219842 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/299c9424-eaef-4472-855a-028b197973dd-inventory" (OuterVolumeSpecName: "inventory") pod "299c9424-eaef-4472-855a-028b197973dd" (UID: "299c9424-eaef-4472-855a-028b197973dd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.220234 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/299c9424-eaef-4472-855a-028b197973dd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "299c9424-eaef-4472-855a-028b197973dd" (UID: "299c9424-eaef-4472-855a-028b197973dd"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.299553 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/299c9424-eaef-4472-855a-028b197973dd-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.299574 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh6bn\" (UniqueName: \"kubernetes.io/projected/299c9424-eaef-4472-855a-028b197973dd-kube-api-access-hh6bn\") on node \"crc\" DevicePath \"\"" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.299585 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/299c9424-eaef-4472-855a-028b197973dd-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.729803 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" event={"ID":"299c9424-eaef-4472-855a-028b197973dd","Type":"ContainerDied","Data":"f7e1cadce8447ccd4db27b488201ef4c528b3f88e2bbed53ffd8c612f9723fd3"} Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.730225 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7e1cadce8447ccd4db27b488201ef4c528b3f88e2bbed53ffd8c612f9723fd3" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.729883 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rlk47" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.794217 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5"] Nov 25 07:16:37 crc kubenswrapper[4482]: E1125 07:16:37.794679 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="299c9424-eaef-4472-855a-028b197973dd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.794696 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="299c9424-eaef-4472-855a-028b197973dd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.794878 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="299c9424-eaef-4472-855a-028b197973dd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.795540 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.797599 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.798576 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.798719 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.798907 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.799728 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5"] Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.918427 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32608924-e17a-4e6f-80db-f8a9a3e1c14b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bpld5\" (UID: \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.918487 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32608924-e17a-4e6f-80db-f8a9a3e1c14b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bpld5\" (UID: \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" Nov 25 07:16:37 crc kubenswrapper[4482]: I1125 07:16:37.918621 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwxnz\" (UniqueName: \"kubernetes.io/projected/32608924-e17a-4e6f-80db-f8a9a3e1c14b-kube-api-access-rwxnz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bpld5\" (UID: \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" Nov 25 07:16:38 crc kubenswrapper[4482]: I1125 07:16:38.021052 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32608924-e17a-4e6f-80db-f8a9a3e1c14b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bpld5\" (UID: \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" Nov 25 07:16:38 crc kubenswrapper[4482]: I1125 07:16:38.021557 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32608924-e17a-4e6f-80db-f8a9a3e1c14b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bpld5\" (UID: \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" Nov 25 07:16:38 crc kubenswrapper[4482]: I1125 07:16:38.021699 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwxnz\" (UniqueName: \"kubernetes.io/projected/32608924-e17a-4e6f-80db-f8a9a3e1c14b-kube-api-access-rwxnz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bpld5\" 
(UID: \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" Nov 25 07:16:38 crc kubenswrapper[4482]: I1125 07:16:38.026702 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32608924-e17a-4e6f-80db-f8a9a3e1c14b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bpld5\" (UID: \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" Nov 25 07:16:38 crc kubenswrapper[4482]: I1125 07:16:38.041022 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32608924-e17a-4e6f-80db-f8a9a3e1c14b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bpld5\" (UID: \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" Nov 25 07:16:38 crc kubenswrapper[4482]: I1125 07:16:38.051578 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwxnz\" (UniqueName: \"kubernetes.io/projected/32608924-e17a-4e6f-80db-f8a9a3e1c14b-kube-api-access-rwxnz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bpld5\" (UID: \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" Nov 25 07:16:38 crc kubenswrapper[4482]: I1125 07:16:38.119126 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" Nov 25 07:16:38 crc kubenswrapper[4482]: I1125 07:16:38.475114 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5"] Nov 25 07:16:38 crc kubenswrapper[4482]: I1125 07:16:38.738963 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" event={"ID":"32608924-e17a-4e6f-80db-f8a9a3e1c14b","Type":"ContainerStarted","Data":"ba4a9aeea2b1d808ad293ceab378f5cb27a124dca254a8a60ad0f5326182b20b"} Nov 25 07:16:39 crc kubenswrapper[4482]: I1125 07:16:39.730672 4482 scope.go:117] "RemoveContainer" containerID="3132db4959392cd254b5a13deb1af5c3b426f3737606912f1d136bc6c5461ae5" Nov 25 07:16:39 crc kubenswrapper[4482]: I1125 07:16:39.753765 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" event={"ID":"32608924-e17a-4e6f-80db-f8a9a3e1c14b","Type":"ContainerStarted","Data":"04acbe7311df9d09b124bdd9e27308e586cee65164d5283b0585465ec92183fe"} Nov 25 07:16:39 crc kubenswrapper[4482]: I1125 07:16:39.774768 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" podStartSLOduration=2.179945957 podStartE2EDuration="2.774754675s" podCreationTimestamp="2025-11-25 07:16:37 +0000 UTC" firstStartedPulling="2025-11-25 07:16:38.466971481 +0000 UTC m=+1772.955202730" lastFinishedPulling="2025-11-25 07:16:39.061780189 +0000 UTC m=+1773.550011448" observedRunningTime="2025-11-25 07:16:39.771736916 +0000 UTC m=+1774.259968175" watchObservedRunningTime="2025-11-25 07:16:39.774754675 +0000 UTC m=+1774.262985933" Nov 25 07:16:47 crc kubenswrapper[4482]: I1125 07:16:47.830744 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:16:47 crc kubenswrapper[4482]: E1125 
07:16:47.831403 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:16:59 crc kubenswrapper[4482]: I1125 07:16:59.830505 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:16:59 crc kubenswrapper[4482]: E1125 07:16:59.831076 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:17:14 crc kubenswrapper[4482]: I1125 07:17:14.831112 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:17:14 crc kubenswrapper[4482]: E1125 07:17:14.831641 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:17:16 crc kubenswrapper[4482]: I1125 07:17:16.010967 4482 generic.go:334] "Generic (PLEG): container finished" podID="32608924-e17a-4e6f-80db-f8a9a3e1c14b" containerID="04acbe7311df9d09b124bdd9e27308e586cee65164d5283b0585465ec92183fe" exitCode=0 Nov 25 07:17:16 crc kubenswrapper[4482]: I1125 07:17:16.011001 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" event={"ID":"32608924-e17a-4e6f-80db-f8a9a3e1c14b","Type":"ContainerDied","Data":"04acbe7311df9d09b124bdd9e27308e586cee65164d5283b0585465ec92183fe"} Nov 25 07:17:17 crc kubenswrapper[4482]: I1125 07:17:17.298237 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" Nov 25 07:17:17 crc kubenswrapper[4482]: I1125 07:17:17.379149 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32608924-e17a-4e6f-80db-f8a9a3e1c14b-inventory\") pod \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\" (UID: \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\") " Nov 25 07:17:17 crc kubenswrapper[4482]: I1125 07:17:17.379231 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32608924-e17a-4e6f-80db-f8a9a3e1c14b-ssh-key\") pod \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\" (UID: \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\") " Nov 25 07:17:17 crc kubenswrapper[4482]: I1125 07:17:17.379284 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwxnz\" (UniqueName: \"kubernetes.io/projected/32608924-e17a-4e6f-80db-f8a9a3e1c14b-kube-api-access-rwxnz\") pod \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\" (UID: \"32608924-e17a-4e6f-80db-f8a9a3e1c14b\") " Nov 25 07:17:17 crc kubenswrapper[4482]: I1125 07:17:17.383545 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32608924-e17a-4e6f-80db-f8a9a3e1c14b-kube-api-access-rwxnz" (OuterVolumeSpecName: "kube-api-access-rwxnz") pod "32608924-e17a-4e6f-80db-f8a9a3e1c14b" (UID: "32608924-e17a-4e6f-80db-f8a9a3e1c14b"). InnerVolumeSpecName "kube-api-access-rwxnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:17:17 crc kubenswrapper[4482]: I1125 07:17:17.403345 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32608924-e17a-4e6f-80db-f8a9a3e1c14b-inventory" (OuterVolumeSpecName: "inventory") pod "32608924-e17a-4e6f-80db-f8a9a3e1c14b" (UID: "32608924-e17a-4e6f-80db-f8a9a3e1c14b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:17:17 crc kubenswrapper[4482]: I1125 07:17:17.403684 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32608924-e17a-4e6f-80db-f8a9a3e1c14b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "32608924-e17a-4e6f-80db-f8a9a3e1c14b" (UID: "32608924-e17a-4e6f-80db-f8a9a3e1c14b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:17:17 crc kubenswrapper[4482]: I1125 07:17:17.482131 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32608924-e17a-4e6f-80db-f8a9a3e1c14b-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:17:17 crc kubenswrapper[4482]: I1125 07:17:17.482189 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32608924-e17a-4e6f-80db-f8a9a3e1c14b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:17:17 crc kubenswrapper[4482]: I1125 07:17:17.482201 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwxnz\" (UniqueName: \"kubernetes.io/projected/32608924-e17a-4e6f-80db-f8a9a3e1c14b-kube-api-access-rwxnz\") on node \"crc\" DevicePath \"\"" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.026202 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" event={"ID":"32608924-e17a-4e6f-80db-f8a9a3e1c14b","Type":"ContainerDied","Data":"ba4a9aeea2b1d808ad293ceab378f5cb27a124dca254a8a60ad0f5326182b20b"} Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.026240 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba4a9aeea2b1d808ad293ceab378f5cb27a124dca254a8a60ad0f5326182b20b" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.026243 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bpld5" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.090490 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7kvpr"] Nov 25 07:17:18 crc kubenswrapper[4482]: E1125 07:17:18.091018 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32608924-e17a-4e6f-80db-f8a9a3e1c14b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.091034 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="32608924-e17a-4e6f-80db-f8a9a3e1c14b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.093648 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="32608924-e17a-4e6f-80db-f8a9a3e1c14b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.094255 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.096537 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.096574 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.096997 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.097013 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.100336 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7kvpr"] Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.294029 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6629889e-6140-4fec-b44e-aed6f31f35d4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7kvpr\" (UID: \"6629889e-6140-4fec-b44e-aed6f31f35d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.294104 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6629889e-6140-4fec-b44e-aed6f31f35d4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7kvpr\" (UID: \"6629889e-6140-4fec-b44e-aed6f31f35d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.294126 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbswd\" (UniqueName: \"kubernetes.io/projected/6629889e-6140-4fec-b44e-aed6f31f35d4-kube-api-access-gbswd\") pod \"ssh-known-hosts-edpm-deployment-7kvpr\" (UID: \"6629889e-6140-4fec-b44e-aed6f31f35d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.396784 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6629889e-6140-4fec-b44e-aed6f31f35d4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7kvpr\" (UID: \"6629889e-6140-4fec-b44e-aed6f31f35d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.396922 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6629889e-6140-4fec-b44e-aed6f31f35d4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7kvpr\" (UID: \"6629889e-6140-4fec-b44e-aed6f31f35d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.396946 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbswd\" (UniqueName: \"kubernetes.io/projected/6629889e-6140-4fec-b44e-aed6f31f35d4-kube-api-access-gbswd\") pod \"ssh-known-hosts-edpm-deployment-7kvpr\" (UID: \"6629889e-6140-4fec-b44e-aed6f31f35d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" Nov 25 07:17:18 crc 
kubenswrapper[4482]: I1125 07:17:18.402434 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6629889e-6140-4fec-b44e-aed6f31f35d4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7kvpr\" (UID: \"6629889e-6140-4fec-b44e-aed6f31f35d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.402899 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6629889e-6140-4fec-b44e-aed6f31f35d4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7kvpr\" (UID: \"6629889e-6140-4fec-b44e-aed6f31f35d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.410486 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbswd\" (UniqueName: \"kubernetes.io/projected/6629889e-6140-4fec-b44e-aed6f31f35d4-kube-api-access-gbswd\") pod \"ssh-known-hosts-edpm-deployment-7kvpr\" (UID: \"6629889e-6140-4fec-b44e-aed6f31f35d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" Nov 25 07:17:18 crc kubenswrapper[4482]: I1125 07:17:18.707784 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" Nov 25 07:17:19 crc kubenswrapper[4482]: I1125 07:17:19.152416 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7kvpr"] Nov 25 07:17:20 crc kubenswrapper[4482]: I1125 07:17:20.041226 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" event={"ID":"6629889e-6140-4fec-b44e-aed6f31f35d4","Type":"ContainerStarted","Data":"dfd1b1f3fcb61b598f670e11d583a0c52e9bcd55ef9b919eced6c86648bfe4f0"} Nov 25 07:17:21 crc kubenswrapper[4482]: I1125 07:17:21.049462 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" event={"ID":"6629889e-6140-4fec-b44e-aed6f31f35d4","Type":"ContainerStarted","Data":"cf047b7292228b411ad3232eff15d6daf3de433f1c7d84b53d7630b235fd2fb1"} Nov 25 07:17:21 crc kubenswrapper[4482]: I1125 07:17:21.061279 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" podStartSLOduration=2.304017882 podStartE2EDuration="3.061259787s" podCreationTimestamp="2025-11-25 07:17:18 +0000 UTC" firstStartedPulling="2025-11-25 07:17:19.158069778 +0000 UTC m=+1813.646301037" lastFinishedPulling="2025-11-25 07:17:19.915311683 +0000 UTC m=+1814.403542942" observedRunningTime="2025-11-25 07:17:21.060486168 +0000 UTC m=+1815.548717428" watchObservedRunningTime="2025-11-25 07:17:21.061259787 +0000 UTC m=+1815.549491036" Nov 25 07:17:25 crc kubenswrapper[4482]: I1125 07:17:25.080550 4482 generic.go:334] "Generic (PLEG): container finished" podID="6629889e-6140-4fec-b44e-aed6f31f35d4" containerID="cf047b7292228b411ad3232eff15d6daf3de433f1c7d84b53d7630b235fd2fb1" exitCode=0 Nov 25 07:17:25 crc kubenswrapper[4482]: I1125 07:17:25.080620 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" event={"ID":"6629889e-6140-4fec-b44e-aed6f31f35d4","Type":"ContainerDied","Data":"cf047b7292228b411ad3232eff15d6daf3de433f1c7d84b53d7630b235fd2fb1"} Nov 25 07:17:25 crc kubenswrapper[4482]: I1125 07:17:25.836104 4482 scope.go:117] "RemoveContainer" 
containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:17:25 crc kubenswrapper[4482]: E1125 07:17:25.836491 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:17:26 crc kubenswrapper[4482]: I1125 07:17:26.394223 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" Nov 25 07:17:26 crc kubenswrapper[4482]: I1125 07:17:26.436674 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbswd\" (UniqueName: \"kubernetes.io/projected/6629889e-6140-4fec-b44e-aed6f31f35d4-kube-api-access-gbswd\") pod \"6629889e-6140-4fec-b44e-aed6f31f35d4\" (UID: \"6629889e-6140-4fec-b44e-aed6f31f35d4\") " Nov 25 07:17:26 crc kubenswrapper[4482]: I1125 07:17:26.436832 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6629889e-6140-4fec-b44e-aed6f31f35d4-ssh-key-openstack-edpm-ipam\") pod \"6629889e-6140-4fec-b44e-aed6f31f35d4\" (UID: \"6629889e-6140-4fec-b44e-aed6f31f35d4\") " Nov 25 07:17:26 crc kubenswrapper[4482]: I1125 07:17:26.436942 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6629889e-6140-4fec-b44e-aed6f31f35d4-inventory-0\") pod \"6629889e-6140-4fec-b44e-aed6f31f35d4\" (UID: \"6629889e-6140-4fec-b44e-aed6f31f35d4\") " Nov 25 07:17:26 crc kubenswrapper[4482]: I1125 07:17:26.446225 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6629889e-6140-4fec-b44e-aed6f31f35d4-kube-api-access-gbswd" (OuterVolumeSpecName: "kube-api-access-gbswd") pod "6629889e-6140-4fec-b44e-aed6f31f35d4" (UID: "6629889e-6140-4fec-b44e-aed6f31f35d4"). InnerVolumeSpecName "kube-api-access-gbswd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:17:26 crc kubenswrapper[4482]: I1125 07:17:26.458320 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6629889e-6140-4fec-b44e-aed6f31f35d4-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "6629889e-6140-4fec-b44e-aed6f31f35d4" (UID: "6629889e-6140-4fec-b44e-aed6f31f35d4"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:17:26 crc kubenswrapper[4482]: I1125 07:17:26.458500 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6629889e-6140-4fec-b44e-aed6f31f35d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6629889e-6140-4fec-b44e-aed6f31f35d4" (UID: "6629889e-6140-4fec-b44e-aed6f31f35d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:17:26 crc kubenswrapper[4482]: I1125 07:17:26.539181 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6629889e-6140-4fec-b44e-aed6f31f35d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 07:17:26 crc kubenswrapper[4482]: I1125 07:17:26.539213 4482 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6629889e-6140-4fec-b44e-aed6f31f35d4-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:17:26 crc kubenswrapper[4482]: I1125 07:17:26.539227 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbswd\" (UniqueName: \"kubernetes.io/projected/6629889e-6140-4fec-b44e-aed6f31f35d4-kube-api-access-gbswd\") on node \"crc\" DevicePath \"\"" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.096940 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" event={"ID":"6629889e-6140-4fec-b44e-aed6f31f35d4","Type":"ContainerDied","Data":"dfd1b1f3fcb61b598f670e11d583a0c52e9bcd55ef9b919eced6c86648bfe4f0"} Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.096976 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd1b1f3fcb61b598f670e11d583a0c52e9bcd55ef9b919eced6c86648bfe4f0" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.097143 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7kvpr" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.156294 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b"] Nov 25 07:17:27 crc kubenswrapper[4482]: E1125 07:17:27.156628 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6629889e-6140-4fec-b44e-aed6f31f35d4" containerName="ssh-known-hosts-edpm-deployment" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.156646 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="6629889e-6140-4fec-b44e-aed6f31f35d4" containerName="ssh-known-hosts-edpm-deployment" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.156840 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="6629889e-6140-4fec-b44e-aed6f31f35d4" containerName="ssh-known-hosts-edpm-deployment" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.157440 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.159403 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.159401 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.160017 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.161671 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.171398 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b"] Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.250411 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/139f24cb-9278-4d08-9a47-773461fa73ad-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-px86b\" (UID: \"139f24cb-9278-4d08-9a47-773461fa73ad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.250469 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/139f24cb-9278-4d08-9a47-773461fa73ad-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-px86b\" (UID: \"139f24cb-9278-4d08-9a47-773461fa73ad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.250497 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4mxq\" (UniqueName: \"kubernetes.io/projected/139f24cb-9278-4d08-9a47-773461fa73ad-kube-api-access-h4mxq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-px86b\" (UID: \"139f24cb-9278-4d08-9a47-773461fa73ad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.352280 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/139f24cb-9278-4d08-9a47-773461fa73ad-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-px86b\" (UID: \"139f24cb-9278-4d08-9a47-773461fa73ad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.352446 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/139f24cb-9278-4d08-9a47-773461fa73ad-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-px86b\" (UID: \"139f24cb-9278-4d08-9a47-773461fa73ad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.352545 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4mxq\" (UniqueName: \"kubernetes.io/projected/139f24cb-9278-4d08-9a47-773461fa73ad-kube-api-access-h4mxq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-px86b\" (UID: \"139f24cb-9278-4d08-9a47-773461fa73ad\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.357823 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/139f24cb-9278-4d08-9a47-773461fa73ad-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-px86b\" (UID: \"139f24cb-9278-4d08-9a47-773461fa73ad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.357969 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/139f24cb-9278-4d08-9a47-773461fa73ad-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-px86b\" (UID: \"139f24cb-9278-4d08-9a47-773461fa73ad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.364401 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4mxq\" (UniqueName: \"kubernetes.io/projected/139f24cb-9278-4d08-9a47-773461fa73ad-kube-api-access-h4mxq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-px86b\" (UID: \"139f24cb-9278-4d08-9a47-773461fa73ad\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.469407 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" Nov 25 07:17:27 crc kubenswrapper[4482]: I1125 07:17:27.914885 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b"] Nov 25 07:17:28 crc kubenswrapper[4482]: I1125 07:17:28.105658 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" event={"ID":"139f24cb-9278-4d08-9a47-773461fa73ad","Type":"ContainerStarted","Data":"2fed287e9a9184c55d6a41f291f358fb13cc1accad45128c20470d782739352e"} Nov 25 07:17:29 crc kubenswrapper[4482]: I1125 07:17:29.113835 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" event={"ID":"139f24cb-9278-4d08-9a47-773461fa73ad","Type":"ContainerStarted","Data":"22a89559ab7b10829a2d9e2dbabe8bc545880e542d81e9fee0329d69f3f8a871"} Nov 25 07:17:29 crc kubenswrapper[4482]: I1125 07:17:29.126961 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" podStartSLOduration=1.641655476 podStartE2EDuration="2.126950217s" podCreationTimestamp="2025-11-25 07:17:27 +0000 UTC" firstStartedPulling="2025-11-25 07:17:27.920005123 +0000 UTC m=+1822.408236382" lastFinishedPulling="2025-11-25 07:17:28.405299865 +0000 UTC m=+1822.893531123" observedRunningTime="2025-11-25 07:17:29.123490354 +0000 UTC m=+1823.611721613" watchObservedRunningTime="2025-11-25 07:17:29.126950217 +0000 UTC m=+1823.615181476" Nov 25 07:17:35 crc kubenswrapper[4482]: I1125 07:17:35.157830 4482 generic.go:334] "Generic (PLEG): container finished" podID="139f24cb-9278-4d08-9a47-773461fa73ad" containerID="22a89559ab7b10829a2d9e2dbabe8bc545880e542d81e9fee0329d69f3f8a871" exitCode=0 Nov 25 07:17:35 crc kubenswrapper[4482]: I1125 07:17:35.157911 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" 
event={"ID":"139f24cb-9278-4d08-9a47-773461fa73ad","Type":"ContainerDied","Data":"22a89559ab7b10829a2d9e2dbabe8bc545880e542d81e9fee0329d69f3f8a871"} Nov 25 07:17:36 crc kubenswrapper[4482]: I1125 07:17:36.474574 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" Nov 25 07:17:36 crc kubenswrapper[4482]: I1125 07:17:36.498017 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/139f24cb-9278-4d08-9a47-773461fa73ad-inventory\") pod \"139f24cb-9278-4d08-9a47-773461fa73ad\" (UID: \"139f24cb-9278-4d08-9a47-773461fa73ad\") " Nov 25 07:17:36 crc kubenswrapper[4482]: I1125 07:17:36.498080 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/139f24cb-9278-4d08-9a47-773461fa73ad-ssh-key\") pod \"139f24cb-9278-4d08-9a47-773461fa73ad\" (UID: \"139f24cb-9278-4d08-9a47-773461fa73ad\") " Nov 25 07:17:36 crc kubenswrapper[4482]: I1125 07:17:36.498144 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4mxq\" (UniqueName: \"kubernetes.io/projected/139f24cb-9278-4d08-9a47-773461fa73ad-kube-api-access-h4mxq\") pod \"139f24cb-9278-4d08-9a47-773461fa73ad\" (UID: \"139f24cb-9278-4d08-9a47-773461fa73ad\") " Nov 25 07:17:36 crc kubenswrapper[4482]: I1125 07:17:36.503650 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139f24cb-9278-4d08-9a47-773461fa73ad-kube-api-access-h4mxq" (OuterVolumeSpecName: "kube-api-access-h4mxq") pod "139f24cb-9278-4d08-9a47-773461fa73ad" (UID: "139f24cb-9278-4d08-9a47-773461fa73ad"). InnerVolumeSpecName "kube-api-access-h4mxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:17:36 crc kubenswrapper[4482]: I1125 07:17:36.526521 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139f24cb-9278-4d08-9a47-773461fa73ad-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "139f24cb-9278-4d08-9a47-773461fa73ad" (UID: "139f24cb-9278-4d08-9a47-773461fa73ad"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:17:36 crc kubenswrapper[4482]: I1125 07:17:36.530406 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139f24cb-9278-4d08-9a47-773461fa73ad-inventory" (OuterVolumeSpecName: "inventory") pod "139f24cb-9278-4d08-9a47-773461fa73ad" (UID: "139f24cb-9278-4d08-9a47-773461fa73ad"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:17:36 crc kubenswrapper[4482]: I1125 07:17:36.601389 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/139f24cb-9278-4d08-9a47-773461fa73ad-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:17:36 crc kubenswrapper[4482]: I1125 07:17:36.601510 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/139f24cb-9278-4d08-9a47-773461fa73ad-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:17:36 crc kubenswrapper[4482]: I1125 07:17:36.601563 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4mxq\" (UniqueName: \"kubernetes.io/projected/139f24cb-9278-4d08-9a47-773461fa73ad-kube-api-access-h4mxq\") on node \"crc\" DevicePath \"\"" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.174820 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" event={"ID":"139f24cb-9278-4d08-9a47-773461fa73ad","Type":"ContainerDied","Data":"2fed287e9a9184c55d6a41f291f358fb13cc1accad45128c20470d782739352e"} Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.175030 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fed287e9a9184c55d6a41f291f358fb13cc1accad45128c20470d782739352e" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.174877 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-px86b" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.259822 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52"] Nov 25 07:17:37 crc kubenswrapper[4482]: E1125 07:17:37.260245 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139f24cb-9278-4d08-9a47-773461fa73ad" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.260266 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="139f24cb-9278-4d08-9a47-773461fa73ad" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.260535 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="139f24cb-9278-4d08-9a47-773461fa73ad" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.261151 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.262740 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.263026 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.263157 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.263234 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.275509 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52"] Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.311017 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhkb6\" (UniqueName: \"kubernetes.io/projected/ab2c11c0-3e5b-4f56-aee6-674e3241c393-kube-api-access-hhkb6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-96j52\" (UID: \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.311254 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab2c11c0-3e5b-4f56-aee6-674e3241c393-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-96j52\" (UID: \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.311464 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab2c11c0-3e5b-4f56-aee6-674e3241c393-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-96j52\" (UID: \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.412684 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhkb6\" (UniqueName: \"kubernetes.io/projected/ab2c11c0-3e5b-4f56-aee6-674e3241c393-kube-api-access-hhkb6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-96j52\" (UID: \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.412725 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab2c11c0-3e5b-4f56-aee6-674e3241c393-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-96j52\" (UID: \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.412956 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab2c11c0-3e5b-4f56-aee6-674e3241c393-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-96j52\" (UID: 
\"ab2c11c0-3e5b-4f56-aee6-674e3241c393\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.418596 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab2c11c0-3e5b-4f56-aee6-674e3241c393-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-96j52\" (UID: \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.419476 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab2c11c0-3e5b-4f56-aee6-674e3241c393-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-96j52\" (UID: \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.431045 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhkb6\" (UniqueName: \"kubernetes.io/projected/ab2c11c0-3e5b-4f56-aee6-674e3241c393-kube-api-access-hhkb6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-96j52\" (UID: \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" Nov 25 07:17:37 crc kubenswrapper[4482]: I1125 07:17:37.584114 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" Nov 25 07:17:38 crc kubenswrapper[4482]: I1125 07:17:38.018720 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52"] Nov 25 07:17:38 crc kubenswrapper[4482]: I1125 07:17:38.183212 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" event={"ID":"ab2c11c0-3e5b-4f56-aee6-674e3241c393","Type":"ContainerStarted","Data":"5c2788d344dd8693ec554613be5a915ade4a44dbcb35633f5cc755f4f0dd94a3"} Nov 25 07:17:39 crc kubenswrapper[4482]: I1125 07:17:39.191885 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" event={"ID":"ab2c11c0-3e5b-4f56-aee6-674e3241c393","Type":"ContainerStarted","Data":"30890182d708eb85df6b44c0cf3fb770328317bc37556328631dcb6201a37f10"} Nov 25 07:17:39 crc kubenswrapper[4482]: I1125 07:17:39.830697 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:17:40 crc kubenswrapper[4482]: I1125 07:17:40.218290 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"024965c47687aa00d0ad8db4748dfa0d2b39b80a48007bdb858861dc5eebf7f7"} Nov 25 07:17:40 crc kubenswrapper[4482]: I1125 07:17:40.238562 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" podStartSLOduration=2.720886858 podStartE2EDuration="3.238548129s" podCreationTimestamp="2025-11-25 07:17:37 +0000 UTC" firstStartedPulling="2025-11-25 07:17:38.035027174 +0000 UTC m=+1832.523258433" lastFinishedPulling="2025-11-25 07:17:38.552688445 +0000 UTC m=+1833.040919704" observedRunningTime="2025-11-25 07:17:39.214611414 +0000 UTC m=+1833.702842673" 
watchObservedRunningTime="2025-11-25 07:17:40.238548129 +0000 UTC m=+1834.726779389" Nov 25 07:17:46 crc kubenswrapper[4482]: I1125 07:17:46.261946 4482 generic.go:334] "Generic (PLEG): container finished" podID="ab2c11c0-3e5b-4f56-aee6-674e3241c393" containerID="30890182d708eb85df6b44c0cf3fb770328317bc37556328631dcb6201a37f10" exitCode=0 Nov 25 07:17:46 crc kubenswrapper[4482]: I1125 07:17:46.262023 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" event={"ID":"ab2c11c0-3e5b-4f56-aee6-674e3241c393","Type":"ContainerDied","Data":"30890182d708eb85df6b44c0cf3fb770328317bc37556328631dcb6201a37f10"} Nov 25 07:17:47 crc kubenswrapper[4482]: I1125 07:17:47.578556 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" Nov 25 07:17:47 crc kubenswrapper[4482]: I1125 07:17:47.681060 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhkb6\" (UniqueName: \"kubernetes.io/projected/ab2c11c0-3e5b-4f56-aee6-674e3241c393-kube-api-access-hhkb6\") pod \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\" (UID: \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\") " Nov 25 07:17:47 crc kubenswrapper[4482]: I1125 07:17:47.681182 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab2c11c0-3e5b-4f56-aee6-674e3241c393-inventory\") pod \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\" (UID: \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\") " Nov 25 07:17:47 crc kubenswrapper[4482]: I1125 07:17:47.681289 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab2c11c0-3e5b-4f56-aee6-674e3241c393-ssh-key\") pod \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\" (UID: \"ab2c11c0-3e5b-4f56-aee6-674e3241c393\") " Nov 25 07:17:47 crc kubenswrapper[4482]: I1125 07:17:47.686266 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2c11c0-3e5b-4f56-aee6-674e3241c393-kube-api-access-hhkb6" (OuterVolumeSpecName: "kube-api-access-hhkb6") pod "ab2c11c0-3e5b-4f56-aee6-674e3241c393" (UID: "ab2c11c0-3e5b-4f56-aee6-674e3241c393"). InnerVolumeSpecName "kube-api-access-hhkb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:17:47 crc kubenswrapper[4482]: I1125 07:17:47.703750 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2c11c0-3e5b-4f56-aee6-674e3241c393-inventory" (OuterVolumeSpecName: "inventory") pod "ab2c11c0-3e5b-4f56-aee6-674e3241c393" (UID: "ab2c11c0-3e5b-4f56-aee6-674e3241c393"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:17:47 crc kubenswrapper[4482]: I1125 07:17:47.705129 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2c11c0-3e5b-4f56-aee6-674e3241c393-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ab2c11c0-3e5b-4f56-aee6-674e3241c393" (UID: "ab2c11c0-3e5b-4f56-aee6-674e3241c393"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:17:47 crc kubenswrapper[4482]: I1125 07:17:47.783643 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhkb6\" (UniqueName: \"kubernetes.io/projected/ab2c11c0-3e5b-4f56-aee6-674e3241c393-kube-api-access-hhkb6\") on node \"crc\" DevicePath \"\"" Nov 25 07:17:47 crc kubenswrapper[4482]: I1125 07:17:47.783944 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab2c11c0-3e5b-4f56-aee6-674e3241c393-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:17:47 crc kubenswrapper[4482]: I1125 07:17:47.784003 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab2c11c0-3e5b-4f56-aee6-674e3241c393-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.277730 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" event={"ID":"ab2c11c0-3e5b-4f56-aee6-674e3241c393","Type":"ContainerDied","Data":"5c2788d344dd8693ec554613be5a915ade4a44dbcb35633f5cc755f4f0dd94a3"} Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.277786 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c2788d344dd8693ec554613be5a915ade4a44dbcb35633f5cc755f4f0dd94a3" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.277751 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-96j52" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.354757 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6"] Nov 25 07:17:48 crc kubenswrapper[4482]: E1125 07:17:48.355493 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2c11c0-3e5b-4f56-aee6-674e3241c393" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.355624 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2c11c0-3e5b-4f56-aee6-674e3241c393" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.355901 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2c11c0-3e5b-4f56-aee6-674e3241c393" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.356702 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.361679 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.361843 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.361983 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.362098 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.362220 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.362303 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.362472 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.362579 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.372384 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6"] Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.495588 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.495673 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.495792 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.495831 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.495861 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.495902 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.495969 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.496041 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vzcs\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-kube-api-access-5vzcs\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.496063 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.496125 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.496257 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.496405 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.496454 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.496527 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.598612 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.598673 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.598712 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.598743 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vzcs\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-kube-api-access-5vzcs\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.598762 4482 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.598791 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.598825 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.598875 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.598893 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.598915 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.598956 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.598989 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.599019 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.599036 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.603443 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.603598 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.604016 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.604450 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.606104 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.606300 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.606459 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.606829 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.607522 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.607567 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.607628 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.607788 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.608278 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.615947 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vzcs\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-kube-api-access-5vzcs\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mjls6\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:48 crc kubenswrapper[4482]: I1125 07:17:48.675519 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:17:49 crc kubenswrapper[4482]: I1125 07:17:49.146395 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6"] Nov 25 07:17:49 crc kubenswrapper[4482]: W1125 07:17:49.147159 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33ea8ce8_2954_4332_861d_611e2d8cc588.slice/crio-58cfd4a21b66af99db3c76f858f2a5445d0452145b237b45e8bd740d969a800c WatchSource:0}: Error finding container 58cfd4a21b66af99db3c76f858f2a5445d0452145b237b45e8bd740d969a800c: Status 404 returned error can't find the container with id 58cfd4a21b66af99db3c76f858f2a5445d0452145b237b45e8bd740d969a800c Nov 25 07:17:49 crc kubenswrapper[4482]: I1125 07:17:49.286448 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" event={"ID":"33ea8ce8-2954-4332-861d-611e2d8cc588","Type":"ContainerStarted","Data":"58cfd4a21b66af99db3c76f858f2a5445d0452145b237b45e8bd740d969a800c"} Nov 25 07:17:50 crc kubenswrapper[4482]: I1125 07:17:50.295142 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" event={"ID":"33ea8ce8-2954-4332-861d-611e2d8cc588","Type":"ContainerStarted","Data":"832f6172f95d39cbd986c5cfce535b9c0085d62db7e11b605bbb55a822026d60"} Nov 25 07:17:50 crc kubenswrapper[4482]: I1125 07:17:50.310444 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" podStartSLOduration=1.713719695 podStartE2EDuration="2.310434466s" podCreationTimestamp="2025-11-25 07:17:48 +0000 UTC" firstStartedPulling="2025-11-25 07:17:49.149076058 +0000 UTC m=+1843.637307317" lastFinishedPulling="2025-11-25 07:17:49.745790829 +0000 UTC m=+1844.234022088" observedRunningTime="2025-11-25 07:17:50.308205533 +0000 UTC m=+1844.796436793" watchObservedRunningTime="2025-11-25 07:17:50.310434466 +0000 UTC m=+1844.798665725" Nov 25 07:18:17 crc kubenswrapper[4482]: I1125 07:18:17.503955 4482 generic.go:334] "Generic (PLEG): container finished" podID="33ea8ce8-2954-4332-861d-611e2d8cc588" containerID="832f6172f95d39cbd986c5cfce535b9c0085d62db7e11b605bbb55a822026d60" exitCode=0 Nov 25 07:18:17 crc kubenswrapper[4482]: I1125 07:18:17.504006 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" event={"ID":"33ea8ce8-2954-4332-861d-611e2d8cc588","Type":"ContainerDied","Data":"832f6172f95d39cbd986c5cfce535b9c0085d62db7e11b605bbb55a822026d60"} Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.830290 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939149 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-inventory\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939238 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vzcs\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-kube-api-access-5vzcs\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939300 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-bootstrap-combined-ca-bundle\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939319 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-repo-setup-combined-ca-bundle\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939339 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-ovn-default-certs-0\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939378 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-libvirt-combined-ca-bundle\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939445 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-ovn-combined-ca-bundle\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939532 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939572 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-ssh-key\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939600 4482 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-telemetry-combined-ca-bundle\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939618 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939638 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939668 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-nova-combined-ca-bundle\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.939685 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-neutron-metadata-combined-ca-bundle\") pod \"33ea8ce8-2954-4332-861d-611e2d8cc588\" (UID: \"33ea8ce8-2954-4332-861d-611e2d8cc588\") " Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.947086 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.948690 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.948842 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.948954 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.950501 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.951517 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.951544 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.952206 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.952869 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-kube-api-access-5vzcs" (OuterVolumeSpecName: "kube-api-access-5vzcs") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "kube-api-access-5vzcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.956614 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.956646 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.971694 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.972335 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[4482]: I1125 07:18:18.980433 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-inventory" (OuterVolumeSpecName: "inventory") pod "33ea8ce8-2954-4332-861d-611e2d8cc588" (UID: "33ea8ce8-2954-4332-861d-611e2d8cc588"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041552 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041574 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vzcs\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-kube-api-access-5vzcs\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041586 4482 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041595 4482 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041604 4482 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041614 4482 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041622 4482 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041630 4482 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041639 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041647 4482 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041655 4482 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041663 4482 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/33ea8ce8-2954-4332-861d-611e2d8cc588-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041671 4482 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.041680 4482 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ea8ce8-2954-4332-861d-611e2d8cc588-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.520004 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" event={"ID":"33ea8ce8-2954-4332-861d-611e2d8cc588","Type":"ContainerDied","Data":"58cfd4a21b66af99db3c76f858f2a5445d0452145b237b45e8bd740d969a800c"} Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.520292 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58cfd4a21b66af99db3c76f858f2a5445d0452145b237b45e8bd740d969a800c" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.520061 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mjls6" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.602724 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n"] Nov 25 07:18:19 crc kubenswrapper[4482]: E1125 07:18:19.603156 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ea8ce8-2954-4332-861d-611e2d8cc588" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.603184 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ea8ce8-2954-4332-861d-611e2d8cc588" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.603363 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ea8ce8-2954-4332-861d-611e2d8cc588" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.604002 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.605962 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.606050 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.606305 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.606756 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.606899 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.608101 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n"] Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.649707 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.649765 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbtx2\" (UniqueName: \"kubernetes.io/projected/08247618-2eaf-468e-a857-e6fb71d2a5f0-kube-api-access-nbtx2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.649840 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/08247618-2eaf-468e-a857-e6fb71d2a5f0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.650253 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.650298 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.751724 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.751765 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.751795 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.751833 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbtx2\" (UniqueName: \"kubernetes.io/projected/08247618-2eaf-468e-a857-e6fb71d2a5f0-kube-api-access-nbtx2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.751859 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/08247618-2eaf-468e-a857-e6fb71d2a5f0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.752668 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/08247618-2eaf-468e-a857-e6fb71d2a5f0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.755413 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.755706 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.756782 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.765270 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbtx2\" (UniqueName: \"kubernetes.io/projected/08247618-2eaf-468e-a857-e6fb71d2a5f0-kube-api-access-nbtx2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-spb9n\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:19 crc kubenswrapper[4482]: I1125 07:18:19.919663 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:18:20 crc kubenswrapper[4482]: I1125 07:18:20.339800 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n"] Nov 25 07:18:20 crc kubenswrapper[4482]: I1125 07:18:20.528383 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" event={"ID":"08247618-2eaf-468e-a857-e6fb71d2a5f0","Type":"ContainerStarted","Data":"de22b8d84cd0709bc75da6999413e9249e55f5478bad6ef991747e9b9caa9389"} Nov 25 07:18:21 crc kubenswrapper[4482]: I1125 07:18:21.545854 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" event={"ID":"08247618-2eaf-468e-a857-e6fb71d2a5f0","Type":"ContainerStarted","Data":"9bc00a35907df6bc0cca328a9b7669fdd268388ef87b5963550e791dcac6f8cc"} Nov 25 07:18:21 crc kubenswrapper[4482]: I1125 07:18:21.564712 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" podStartSLOduration=2.032042823 podStartE2EDuration="2.564697191s" podCreationTimestamp="2025-11-25 07:18:19 +0000 UTC" firstStartedPulling="2025-11-25 07:18:20.349448864 +0000 UTC m=+1874.837680123" lastFinishedPulling="2025-11-25 07:18:20.882103232 +0000 UTC m=+1875.370334491" observedRunningTime="2025-11-25 07:18:21.557404152 +0000 UTC m=+1876.045635411" watchObservedRunningTime="2025-11-25 07:18:21.564697191 +0000 UTC m=+1876.052928450" Nov 25 07:18:39 crc kubenswrapper[4482]: I1125 07:18:39.806208 4482 scope.go:117] "RemoveContainer" containerID="b308e6b69a893dd4eb099274b910226855b3fd7a6454936464d8a5b56908738f" Nov 25 07:18:39 crc kubenswrapper[4482]: I1125 07:18:39.823750 4482 scope.go:117] "RemoveContainer" containerID="26f26af5b08f474d119ca1b784e655dbaf76e5a8ed034aad44e3f07d68278749" Nov 25 07:18:39 crc kubenswrapper[4482]: I1125 07:18:39.842737 4482 scope.go:117] "RemoveContainer" containerID="3ff0602cab30e0f5e7b0effbcf1d20d1bf707782c6021346e269605d239b012a" Nov 25 07:19:04 crc kubenswrapper[4482]: I1125 07:19:04.873562 4482 generic.go:334] "Generic (PLEG): container finished" podID="08247618-2eaf-468e-a857-e6fb71d2a5f0" containerID="9bc00a35907df6bc0cca328a9b7669fdd268388ef87b5963550e791dcac6f8cc" exitCode=0 Nov 25 07:19:04 crc kubenswrapper[4482]: I1125 07:19:04.873634 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" event={"ID":"08247618-2eaf-468e-a857-e6fb71d2a5f0","Type":"ContainerDied","Data":"9bc00a35907df6bc0cca328a9b7669fdd268388ef87b5963550e791dcac6f8cc"} Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.182363 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.360024 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-inventory\") pod \"08247618-2eaf-468e-a857-e6fb71d2a5f0\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.360307 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/08247618-2eaf-468e-a857-e6fb71d2a5f0-ovncontroller-config-0\") pod \"08247618-2eaf-468e-a857-e6fb71d2a5f0\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.360330 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-ovn-combined-ca-bundle\") pod \"08247618-2eaf-468e-a857-e6fb71d2a5f0\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.360345 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-ssh-key\") pod \"08247618-2eaf-468e-a857-e6fb71d2a5f0\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.360430 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbtx2\" (UniqueName: \"kubernetes.io/projected/08247618-2eaf-468e-a857-e6fb71d2a5f0-kube-api-access-nbtx2\") pod \"08247618-2eaf-468e-a857-e6fb71d2a5f0\" (UID: \"08247618-2eaf-468e-a857-e6fb71d2a5f0\") " Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.364319 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "08247618-2eaf-468e-a857-e6fb71d2a5f0" (UID: "08247618-2eaf-468e-a857-e6fb71d2a5f0"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.365237 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08247618-2eaf-468e-a857-e6fb71d2a5f0-kube-api-access-nbtx2" (OuterVolumeSpecName: "kube-api-access-nbtx2") pod "08247618-2eaf-468e-a857-e6fb71d2a5f0" (UID: "08247618-2eaf-468e-a857-e6fb71d2a5f0"). InnerVolumeSpecName "kube-api-access-nbtx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.378625 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08247618-2eaf-468e-a857-e6fb71d2a5f0-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "08247618-2eaf-468e-a857-e6fb71d2a5f0" (UID: "08247618-2eaf-468e-a857-e6fb71d2a5f0"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.382420 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "08247618-2eaf-468e-a857-e6fb71d2a5f0" (UID: "08247618-2eaf-468e-a857-e6fb71d2a5f0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.383144 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-inventory" (OuterVolumeSpecName: "inventory") pod "08247618-2eaf-468e-a857-e6fb71d2a5f0" (UID: "08247618-2eaf-468e-a857-e6fb71d2a5f0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.462695 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbtx2\" (UniqueName: \"kubernetes.io/projected/08247618-2eaf-468e-a857-e6fb71d2a5f0-kube-api-access-nbtx2\") on node \"crc\" DevicePath \"\"" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.462722 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.462733 4482 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/08247618-2eaf-468e-a857-e6fb71d2a5f0-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.462743 4482 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.462751 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08247618-2eaf-468e-a857-e6fb71d2a5f0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.887939 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" event={"ID":"08247618-2eaf-468e-a857-e6fb71d2a5f0","Type":"ContainerDied","Data":"de22b8d84cd0709bc75da6999413e9249e55f5478bad6ef991747e9b9caa9389"} Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.887975 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de22b8d84cd0709bc75da6999413e9249e55f5478bad6ef991747e9b9caa9389" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.887987 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-spb9n" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.967999 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb"] Nov 25 07:19:06 crc kubenswrapper[4482]: E1125 07:19:06.968499 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08247618-2eaf-468e-a857-e6fb71d2a5f0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.968517 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="08247618-2eaf-468e-a857-e6fb71d2a5f0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.968722 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="08247618-2eaf-468e-a857-e6fb71d2a5f0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.969393 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.972635 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.972802 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.972941 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.973067 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.974103 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.974792 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:19:06 crc kubenswrapper[4482]: I1125 07:19:06.994581 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb"] Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.071184 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.071254 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.071431 4482 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfqll\" (UniqueName: \"kubernetes.io/projected/d039b02e-3917-489b-94c0-71191f8a3e55-kube-api-access-dfqll\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.071548 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.071738 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.071813 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.172862 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfqll\" (UniqueName: \"kubernetes.io/projected/d039b02e-3917-489b-94c0-71191f8a3e55-kube-api-access-dfqll\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.172932 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.173032 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.173118 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.173582 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.173633 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.177776 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.177776 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.177895 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.179075 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.179180 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.187741 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfqll\" (UniqueName: 
\"kubernetes.io/projected/d039b02e-3917-489b-94c0-71191f8a3e55-kube-api-access-dfqll\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.282289 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.698743 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb"] Nov 25 07:19:07 crc kubenswrapper[4482]: I1125 07:19:07.895591 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" event={"ID":"d039b02e-3917-489b-94c0-71191f8a3e55","Type":"ContainerStarted","Data":"4e3f80337e716e20c60bc72a42029af24bff21aea35831332bf24015881320c9"} Nov 25 07:19:08 crc kubenswrapper[4482]: I1125 07:19:08.904624 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" event={"ID":"d039b02e-3917-489b-94c0-71191f8a3e55","Type":"ContainerStarted","Data":"61d997288abc28d8f6c66a8a1591750120a2a4ee433ee4f062d46a06f40ee08c"} Nov 25 07:19:08 crc kubenswrapper[4482]: I1125 07:19:08.918973 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" podStartSLOduration=2.372382531 podStartE2EDuration="2.918960863s" podCreationTimestamp="2025-11-25 07:19:06 +0000 UTC" firstStartedPulling="2025-11-25 07:19:07.700802734 +0000 UTC m=+1922.189033993" lastFinishedPulling="2025-11-25 07:19:08.247381065 +0000 UTC m=+1922.735612325" observedRunningTime="2025-11-25 07:19:08.914584803 +0000 UTC m=+1923.402816062" watchObservedRunningTime="2025-11-25 07:19:08.918960863 +0000 UTC m=+1923.407192122" Nov 25 07:19:39 crc kubenswrapper[4482]: I1125 07:19:39.117984 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:19:39 crc kubenswrapper[4482]: I1125 07:19:39.118451 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:19:42 crc kubenswrapper[4482]: I1125 07:19:42.131758 4482 generic.go:334] "Generic (PLEG): container finished" podID="d039b02e-3917-489b-94c0-71191f8a3e55" containerID="61d997288abc28d8f6c66a8a1591750120a2a4ee433ee4f062d46a06f40ee08c" exitCode=0 Nov 25 07:19:42 crc kubenswrapper[4482]: I1125 07:19:42.131858 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" event={"ID":"d039b02e-3917-489b-94c0-71191f8a3e55","Type":"ContainerDied","Data":"61d997288abc28d8f6c66a8a1591750120a2a4ee433ee4f062d46a06f40ee08c"} Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.448924 4482 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.634732 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-neutron-metadata-combined-ca-bundle\") pod \"d039b02e-3917-489b-94c0-71191f8a3e55\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") "
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.634775 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfqll\" (UniqueName: \"kubernetes.io/projected/d039b02e-3917-489b-94c0-71191f8a3e55-kube-api-access-dfqll\") pod \"d039b02e-3917-489b-94c0-71191f8a3e55\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") "
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.634803 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-inventory\") pod \"d039b02e-3917-489b-94c0-71191f8a3e55\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") "
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.634848 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-neutron-ovn-metadata-agent-neutron-config-0\") pod \"d039b02e-3917-489b-94c0-71191f8a3e55\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") "
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.634878 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-nova-metadata-neutron-config-0\") pod \"d039b02e-3917-489b-94c0-71191f8a3e55\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") "
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.634904 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-ssh-key\") pod \"d039b02e-3917-489b-94c0-71191f8a3e55\" (UID: \"d039b02e-3917-489b-94c0-71191f8a3e55\") "
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.651292 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d039b02e-3917-489b-94c0-71191f8a3e55" (UID: "d039b02e-3917-489b-94c0-71191f8a3e55"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.651323 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d039b02e-3917-489b-94c0-71191f8a3e55-kube-api-access-dfqll" (OuterVolumeSpecName: "kube-api-access-dfqll") pod "d039b02e-3917-489b-94c0-71191f8a3e55" (UID: "d039b02e-3917-489b-94c0-71191f8a3e55"). InnerVolumeSpecName "kube-api-access-dfqll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.656557 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "d039b02e-3917-489b-94c0-71191f8a3e55" (UID: "d039b02e-3917-489b-94c0-71191f8a3e55"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.656938 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d039b02e-3917-489b-94c0-71191f8a3e55" (UID: "d039b02e-3917-489b-94c0-71191f8a3e55"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.657149 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-inventory" (OuterVolumeSpecName: "inventory") pod "d039b02e-3917-489b-94c0-71191f8a3e55" (UID: "d039b02e-3917-489b-94c0-71191f8a3e55"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.661979 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "d039b02e-3917-489b-94c0-71191f8a3e55" (UID: "d039b02e-3917-489b-94c0-71191f8a3e55"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.736987 4482 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.737013 4482 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.737024 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.737050 4482 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.737061 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfqll\" (UniqueName: \"kubernetes.io/projected/d039b02e-3917-489b-94c0-71191f8a3e55-kube-api-access-dfqll\") on node \"crc\" DevicePath \"\""
Nov 25 07:19:43 crc kubenswrapper[4482]: I1125 07:19:43.737069 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d039b02e-3917-489b-94c0-71191f8a3e55-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.147658 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb" event={"ID":"d039b02e-3917-489b-94c0-71191f8a3e55","Type":"ContainerDied","Data":"4e3f80337e716e20c60bc72a42029af24bff21aea35831332bf24015881320c9"}
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.147694 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e3f80337e716e20c60bc72a42029af24bff21aea35831332bf24015881320c9"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.147696 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hl6jb"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.213119 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"]
Nov 25 07:19:44 crc kubenswrapper[4482]: E1125 07:19:44.213775 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d039b02e-3917-489b-94c0-71191f8a3e55" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.213842 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="d039b02e-3917-489b-94c0-71191f8a3e55" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.214062 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="d039b02e-3917-489b-94c0-71191f8a3e55" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.214706 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.216648 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.216982 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.217987 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.222923 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"]
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.223561 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.224680 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.245396 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.245466 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.245590 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.245618 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.245649 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h87gk\" (UniqueName: \"kubernetes.io/projected/59757a63-f30a-473f-b02f-55545e2303ff-kube-api-access-h87gk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.346773 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.346844 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.346870 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.346899 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h87gk\" (UniqueName: \"kubernetes.io/projected/59757a63-f30a-473f-b02f-55545e2303ff-kube-api-access-h87gk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.346946 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.350112 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.350195 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.350681 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.350920 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.360211 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h87gk\" (UniqueName: \"kubernetes.io/projected/59757a63-f30a-473f-b02f-55545e2303ff-kube-api-access-h87gk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.527022 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:19:44 crc kubenswrapper[4482]: I1125 07:19:44.947287 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"]
Nov 25 07:19:45 crc kubenswrapper[4482]: I1125 07:19:45.155455 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw" event={"ID":"59757a63-f30a-473f-b02f-55545e2303ff","Type":"ContainerStarted","Data":"9cf670a7b17a3ec54656ac5b205788632ff7f8985d2bc581c83703d9b403aad7"}
Nov 25 07:19:46 crc kubenswrapper[4482]: I1125 07:19:46.162984 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw" event={"ID":"59757a63-f30a-473f-b02f-55545e2303ff","Type":"ContainerStarted","Data":"f1246569104a449178135d8749c7e9d89a36bcc1267b25d496ae8bca422dfefb"}
Nov 25 07:19:46 crc kubenswrapper[4482]: I1125 07:19:46.183322 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw" podStartSLOduration=1.641870844 podStartE2EDuration="2.18330575s" podCreationTimestamp="2025-11-25 07:19:44 +0000 UTC" firstStartedPulling="2025-11-25 07:19:44.950656903 +0000 UTC m=+1959.438888163" lastFinishedPulling="2025-11-25 07:19:45.49209181 +0000 UTC m=+1959.980323069" observedRunningTime="2025-11-25 07:19:46.174467559 +0000 UTC m=+1960.662698817" watchObservedRunningTime="2025-11-25 07:19:46.18330575 +0000 UTC m=+1960.671537009"
Nov 25 07:20:09 crc kubenswrapper[4482]: I1125 07:20:09.118318 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 07:20:09 crc kubenswrapper[4482]: I1125 07:20:09.118928 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 07:20:39 crc kubenswrapper[4482]: I1125 07:20:39.118088 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 07:20:39 crc kubenswrapper[4482]: I1125 07:20:39.118460 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:20:39 crc kubenswrapper[4482]: I1125 07:20:39.118495 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 07:20:39 crc kubenswrapper[4482]: I1125 07:20:39.118982 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"024965c47687aa00d0ad8db4748dfa0d2b39b80a48007bdb858861dc5eebf7f7"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:20:39 crc kubenswrapper[4482]: I1125 07:20:39.119028 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://024965c47687aa00d0ad8db4748dfa0d2b39b80a48007bdb858861dc5eebf7f7" gracePeriod=600 Nov 25 07:20:39 crc kubenswrapper[4482]: I1125 07:20:39.550788 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="024965c47687aa00d0ad8db4748dfa0d2b39b80a48007bdb858861dc5eebf7f7" exitCode=0 Nov 25 07:20:39 crc kubenswrapper[4482]: I1125 07:20:39.550848 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"024965c47687aa00d0ad8db4748dfa0d2b39b80a48007bdb858861dc5eebf7f7"} Nov 25 07:20:39 crc kubenswrapper[4482]: I1125 07:20:39.551222 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a"} Nov 25 07:20:39 crc kubenswrapper[4482]: I1125 07:20:39.551306 4482 scope.go:117] "RemoveContainer" containerID="d2c9c664f3d458430b85e7bc09127dc8eb16093243ff384eafc3ae7d3281ae77" Nov 25 07:22:15 crc kubenswrapper[4482]: I1125 07:22:15.951885 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4twk2"] Nov 25 07:22:15 crc kubenswrapper[4482]: I1125 07:22:15.954632 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4twk2" Nov 25 07:22:15 crc kubenswrapper[4482]: I1125 07:22:15.989794 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4twk2"] Nov 25 07:22:16 crc kubenswrapper[4482]: I1125 07:22:16.034641 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfsrw\" (UniqueName: \"kubernetes.io/projected/ded0f019-e4aa-4632-b88b-436b83ea4db3-kube-api-access-rfsrw\") pod \"redhat-marketplace-4twk2\" (UID: \"ded0f019-e4aa-4632-b88b-436b83ea4db3\") " pod="openshift-marketplace/redhat-marketplace-4twk2" Nov 25 07:22:16 crc kubenswrapper[4482]: I1125 07:22:16.034694 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded0f019-e4aa-4632-b88b-436b83ea4db3-catalog-content\") pod \"redhat-marketplace-4twk2\" (UID: \"ded0f019-e4aa-4632-b88b-436b83ea4db3\") " pod="openshift-marketplace/redhat-marketplace-4twk2" Nov 25 07:22:16 crc kubenswrapper[4482]: I1125 07:22:16.034716 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded0f019-e4aa-4632-b88b-436b83ea4db3-utilities\") pod \"redhat-marketplace-4twk2\" (UID: \"ded0f019-e4aa-4632-b88b-436b83ea4db3\") " pod="openshift-marketplace/redhat-marketplace-4twk2" Nov 25 07:22:16 crc kubenswrapper[4482]: I1125 07:22:16.136328 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfsrw\" (UniqueName: \"kubernetes.io/projected/ded0f019-e4aa-4632-b88b-436b83ea4db3-kube-api-access-rfsrw\") pod \"redhat-marketplace-4twk2\" (UID: \"ded0f019-e4aa-4632-b88b-436b83ea4db3\") " pod="openshift-marketplace/redhat-marketplace-4twk2" Nov 25 07:22:16 crc kubenswrapper[4482]: I1125 07:22:16.136396 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded0f019-e4aa-4632-b88b-436b83ea4db3-catalog-content\") pod \"redhat-marketplace-4twk2\" (UID: \"ded0f019-e4aa-4632-b88b-436b83ea4db3\") " pod="openshift-marketplace/redhat-marketplace-4twk2" Nov 25 07:22:16 crc kubenswrapper[4482]: I1125 07:22:16.136422 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded0f019-e4aa-4632-b88b-436b83ea4db3-utilities\") pod \"redhat-marketplace-4twk2\" (UID: \"ded0f019-e4aa-4632-b88b-436b83ea4db3\") " pod="openshift-marketplace/redhat-marketplace-4twk2" Nov 25 07:22:16 crc kubenswrapper[4482]: I1125 07:22:16.136849 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded0f019-e4aa-4632-b88b-436b83ea4db3-catalog-content\") pod \"redhat-marketplace-4twk2\" (UID: \"ded0f019-e4aa-4632-b88b-436b83ea4db3\") " pod="openshift-marketplace/redhat-marketplace-4twk2" Nov 25 07:22:16 crc kubenswrapper[4482]: I1125 07:22:16.136917 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded0f019-e4aa-4632-b88b-436b83ea4db3-utilities\") pod \"redhat-marketplace-4twk2\" (UID: \"ded0f019-e4aa-4632-b88b-436b83ea4db3\") " pod="openshift-marketplace/redhat-marketplace-4twk2" Nov 25 07:22:16 crc kubenswrapper[4482]: I1125 07:22:16.155499 4482 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rfsrw\" (UniqueName: \"kubernetes.io/projected/ded0f019-e4aa-4632-b88b-436b83ea4db3-kube-api-access-rfsrw\") pod \"redhat-marketplace-4twk2\" (UID: \"ded0f019-e4aa-4632-b88b-436b83ea4db3\") " pod="openshift-marketplace/redhat-marketplace-4twk2" Nov 25 07:22:16 crc kubenswrapper[4482]: I1125 07:22:16.287091 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4twk2" Nov 25 07:22:16 crc kubenswrapper[4482]: I1125 07:22:16.708496 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4twk2"] Nov 25 07:22:17 crc kubenswrapper[4482]: I1125 07:22:17.169124 4482 generic.go:334] "Generic (PLEG): container finished" podID="ded0f019-e4aa-4632-b88b-436b83ea4db3" containerID="613e03b38cb18cc968d32a8346bf371a52f3544b6944b8b9bb4a059cf93e9c18" exitCode=0 Nov 25 07:22:17 crc kubenswrapper[4482]: I1125 07:22:17.169207 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4twk2" event={"ID":"ded0f019-e4aa-4632-b88b-436b83ea4db3","Type":"ContainerDied","Data":"613e03b38cb18cc968d32a8346bf371a52f3544b6944b8b9bb4a059cf93e9c18"} Nov 25 07:22:17 crc kubenswrapper[4482]: I1125 07:22:17.169510 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4twk2" event={"ID":"ded0f019-e4aa-4632-b88b-436b83ea4db3","Type":"ContainerStarted","Data":"6e9c7181dfb723f54ba2375b6d4eb22b531394b0d3b704db389e02ac5ed20c05"} Nov 25 07:22:17 crc kubenswrapper[4482]: I1125 07:22:17.171149 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 07:22:18 crc kubenswrapper[4482]: I1125 07:22:18.178355 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4twk2" event={"ID":"ded0f019-e4aa-4632-b88b-436b83ea4db3","Type":"ContainerStarted","Data":"cca0856d2a99c1f1fdcbf562de7ae85e48a01a4eeccbd31c6acda527e34768a8"} Nov 25 07:22:19 crc kubenswrapper[4482]: I1125 07:22:19.192707 4482 generic.go:334] "Generic (PLEG): container finished" podID="ded0f019-e4aa-4632-b88b-436b83ea4db3" containerID="cca0856d2a99c1f1fdcbf562de7ae85e48a01a4eeccbd31c6acda527e34768a8" exitCode=0 Nov 25 07:22:19 crc kubenswrapper[4482]: I1125 07:22:19.193193 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4twk2" event={"ID":"ded0f019-e4aa-4632-b88b-436b83ea4db3","Type":"ContainerDied","Data":"cca0856d2a99c1f1fdcbf562de7ae85e48a01a4eeccbd31c6acda527e34768a8"} Nov 25 07:22:20 crc kubenswrapper[4482]: I1125 07:22:20.204619 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4twk2" event={"ID":"ded0f019-e4aa-4632-b88b-436b83ea4db3","Type":"ContainerStarted","Data":"4894dfdadd092aafdcef6638ce3eab4d27208ff64dc1a3f92539d3a03e36ade3"} Nov 25 07:22:20 crc kubenswrapper[4482]: I1125 07:22:20.223470 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4twk2" podStartSLOduration=2.664914902 podStartE2EDuration="5.223452573s" podCreationTimestamp="2025-11-25 07:22:15 +0000 UTC" firstStartedPulling="2025-11-25 07:22:17.170879281 +0000 UTC m=+2111.659110539" lastFinishedPulling="2025-11-25 07:22:19.729416951 +0000 UTC m=+2114.217648210" observedRunningTime="2025-11-25 07:22:20.223029466 +0000 UTC m=+2114.711260724" watchObservedRunningTime="2025-11-25 07:22:20.223452573 +0000 UTC 
Nov 25 07:22:26 crc kubenswrapper[4482]: I1125 07:22:26.288025 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4twk2"
Nov 25 07:22:26 crc kubenswrapper[4482]: I1125 07:22:26.288441 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4twk2"
Nov 25 07:22:26 crc kubenswrapper[4482]: I1125 07:22:26.324347 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4twk2"
Nov 25 07:22:27 crc kubenswrapper[4482]: I1125 07:22:27.281250 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4twk2"
Nov 25 07:22:27 crc kubenswrapper[4482]: I1125 07:22:27.313643 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4twk2"]
Nov 25 07:22:29 crc kubenswrapper[4482]: I1125 07:22:29.260899 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4twk2" podUID="ded0f019-e4aa-4632-b88b-436b83ea4db3" containerName="registry-server" containerID="cri-o://4894dfdadd092aafdcef6638ce3eab4d27208ff64dc1a3f92539d3a03e36ade3" gracePeriod=2
Nov 25 07:22:29 crc kubenswrapper[4482]: I1125 07:22:29.638665 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4twk2"
Nov 25 07:22:29 crc kubenswrapper[4482]: I1125 07:22:29.773561 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded0f019-e4aa-4632-b88b-436b83ea4db3-catalog-content\") pod \"ded0f019-e4aa-4632-b88b-436b83ea4db3\" (UID: \"ded0f019-e4aa-4632-b88b-436b83ea4db3\") "
Nov 25 07:22:29 crc kubenswrapper[4482]: I1125 07:22:29.773672 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded0f019-e4aa-4632-b88b-436b83ea4db3-utilities\") pod \"ded0f019-e4aa-4632-b88b-436b83ea4db3\" (UID: \"ded0f019-e4aa-4632-b88b-436b83ea4db3\") "
Nov 25 07:22:29 crc kubenswrapper[4482]: I1125 07:22:29.773745 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfsrw\" (UniqueName: \"kubernetes.io/projected/ded0f019-e4aa-4632-b88b-436b83ea4db3-kube-api-access-rfsrw\") pod \"ded0f019-e4aa-4632-b88b-436b83ea4db3\" (UID: \"ded0f019-e4aa-4632-b88b-436b83ea4db3\") "
Nov 25 07:22:29 crc kubenswrapper[4482]: I1125 07:22:29.774209 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ded0f019-e4aa-4632-b88b-436b83ea4db3-utilities" (OuterVolumeSpecName: "utilities") pod "ded0f019-e4aa-4632-b88b-436b83ea4db3" (UID: "ded0f019-e4aa-4632-b88b-436b83ea4db3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:22:29 crc kubenswrapper[4482]: I1125 07:22:29.777616 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded0f019-e4aa-4632-b88b-436b83ea4db3-kube-api-access-rfsrw" (OuterVolumeSpecName: "kube-api-access-rfsrw") pod "ded0f019-e4aa-4632-b88b-436b83ea4db3" (UID: "ded0f019-e4aa-4632-b88b-436b83ea4db3"). InnerVolumeSpecName "kube-api-access-rfsrw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:22:29 crc kubenswrapper[4482]: I1125 07:22:29.784971 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ded0f019-e4aa-4632-b88b-436b83ea4db3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ded0f019-e4aa-4632-b88b-436b83ea4db3" (UID: "ded0f019-e4aa-4632-b88b-436b83ea4db3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:22:29 crc kubenswrapper[4482]: I1125 07:22:29.875667 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded0f019-e4aa-4632-b88b-436b83ea4db3-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 07:22:29 crc kubenswrapper[4482]: I1125 07:22:29.875694 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfsrw\" (UniqueName: \"kubernetes.io/projected/ded0f019-e4aa-4632-b88b-436b83ea4db3-kube-api-access-rfsrw\") on node \"crc\" DevicePath \"\""
Nov 25 07:22:29 crc kubenswrapper[4482]: I1125 07:22:29.875704 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded0f019-e4aa-4632-b88b-436b83ea4db3-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.268356 4482 generic.go:334] "Generic (PLEG): container finished" podID="ded0f019-e4aa-4632-b88b-436b83ea4db3" containerID="4894dfdadd092aafdcef6638ce3eab4d27208ff64dc1a3f92539d3a03e36ade3" exitCode=0
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.268406 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4twk2"
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.268422 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4twk2" event={"ID":"ded0f019-e4aa-4632-b88b-436b83ea4db3","Type":"ContainerDied","Data":"4894dfdadd092aafdcef6638ce3eab4d27208ff64dc1a3f92539d3a03e36ade3"}
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.268676 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4twk2" event={"ID":"ded0f019-e4aa-4632-b88b-436b83ea4db3","Type":"ContainerDied","Data":"6e9c7181dfb723f54ba2375b6d4eb22b531394b0d3b704db389e02ac5ed20c05"}
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.268693 4482 scope.go:117] "RemoveContainer" containerID="4894dfdadd092aafdcef6638ce3eab4d27208ff64dc1a3f92539d3a03e36ade3"
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.285697 4482 scope.go:117] "RemoveContainer" containerID="cca0856d2a99c1f1fdcbf562de7ae85e48a01a4eeccbd31c6acda527e34768a8"
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.286757 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4twk2"]
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.299106 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4twk2"]
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.302940 4482 scope.go:117] "RemoveContainer" containerID="613e03b38cb18cc968d32a8346bf371a52f3544b6944b8b9bb4a059cf93e9c18"
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.334547 4482 scope.go:117] "RemoveContainer" containerID="4894dfdadd092aafdcef6638ce3eab4d27208ff64dc1a3f92539d3a03e36ade3"
Nov 25 07:22:30 crc kubenswrapper[4482]: E1125 07:22:30.334820 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4894dfdadd092aafdcef6638ce3eab4d27208ff64dc1a3f92539d3a03e36ade3\": container with ID starting with 4894dfdadd092aafdcef6638ce3eab4d27208ff64dc1a3f92539d3a03e36ade3 not found: ID does not exist" containerID="4894dfdadd092aafdcef6638ce3eab4d27208ff64dc1a3f92539d3a03e36ade3"
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.334850 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4894dfdadd092aafdcef6638ce3eab4d27208ff64dc1a3f92539d3a03e36ade3"} err="failed to get container status \"4894dfdadd092aafdcef6638ce3eab4d27208ff64dc1a3f92539d3a03e36ade3\": rpc error: code = NotFound desc = could not find container \"4894dfdadd092aafdcef6638ce3eab4d27208ff64dc1a3f92539d3a03e36ade3\": container with ID starting with 4894dfdadd092aafdcef6638ce3eab4d27208ff64dc1a3f92539d3a03e36ade3 not found: ID does not exist"
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.334880 4482 scope.go:117] "RemoveContainer" containerID="cca0856d2a99c1f1fdcbf562de7ae85e48a01a4eeccbd31c6acda527e34768a8"
Nov 25 07:22:30 crc kubenswrapper[4482]: E1125 07:22:30.335162 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cca0856d2a99c1f1fdcbf562de7ae85e48a01a4eeccbd31c6acda527e34768a8\": container with ID starting with cca0856d2a99c1f1fdcbf562de7ae85e48a01a4eeccbd31c6acda527e34768a8 not found: ID does not exist" containerID="cca0856d2a99c1f1fdcbf562de7ae85e48a01a4eeccbd31c6acda527e34768a8"
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.335209 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca0856d2a99c1f1fdcbf562de7ae85e48a01a4eeccbd31c6acda527e34768a8"} err="failed to get container status \"cca0856d2a99c1f1fdcbf562de7ae85e48a01a4eeccbd31c6acda527e34768a8\": rpc error: code = NotFound desc = could not find container \"cca0856d2a99c1f1fdcbf562de7ae85e48a01a4eeccbd31c6acda527e34768a8\": container with ID starting with cca0856d2a99c1f1fdcbf562de7ae85e48a01a4eeccbd31c6acda527e34768a8 not found: ID does not exist"
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.335231 4482 scope.go:117] "RemoveContainer" containerID="613e03b38cb18cc968d32a8346bf371a52f3544b6944b8b9bb4a059cf93e9c18"
Nov 25 07:22:30 crc kubenswrapper[4482]: E1125 07:22:30.335506 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613e03b38cb18cc968d32a8346bf371a52f3544b6944b8b9bb4a059cf93e9c18\": container with ID starting with 613e03b38cb18cc968d32a8346bf371a52f3544b6944b8b9bb4a059cf93e9c18 not found: ID does not exist" containerID="613e03b38cb18cc968d32a8346bf371a52f3544b6944b8b9bb4a059cf93e9c18"
Nov 25 07:22:30 crc kubenswrapper[4482]: I1125 07:22:30.335538 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613e03b38cb18cc968d32a8346bf371a52f3544b6944b8b9bb4a059cf93e9c18"} err="failed to get container status \"613e03b38cb18cc968d32a8346bf371a52f3544b6944b8b9bb4a059cf93e9c18\": rpc error: code = NotFound desc = could not find container \"613e03b38cb18cc968d32a8346bf371a52f3544b6944b8b9bb4a059cf93e9c18\": container with ID starting with 613e03b38cb18cc968d32a8346bf371a52f3544b6944b8b9bb4a059cf93e9c18 not found: ID does not exist"
Nov 25 07:22:31 crc kubenswrapper[4482]: I1125 07:22:31.837535 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ded0f019-e4aa-4632-b88b-436b83ea4db3" path="/var/lib/kubelet/pods/ded0f019-e4aa-4632-b88b-436b83ea4db3/volumes"
Nov 25 07:22:39 crc kubenswrapper[4482]: I1125 07:22:39.118048 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 07:22:39 crc kubenswrapper[4482]: I1125 07:22:39.118514 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 07:22:39 crc kubenswrapper[4482]: I1125 07:22:39.329723 4482 generic.go:334] "Generic (PLEG): container finished" podID="59757a63-f30a-473f-b02f-55545e2303ff" containerID="f1246569104a449178135d8749c7e9d89a36bcc1267b25d496ae8bca422dfefb" exitCode=0
Nov 25 07:22:39 crc kubenswrapper[4482]: I1125 07:22:39.329770 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw" event={"ID":"59757a63-f30a-473f-b02f-55545e2303ff","Type":"ContainerDied","Data":"f1246569104a449178135d8749c7e9d89a36bcc1267b25d496ae8bca422dfefb"}
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.645379 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.835707 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h87gk\" (UniqueName: \"kubernetes.io/projected/59757a63-f30a-473f-b02f-55545e2303ff-kube-api-access-h87gk\") pod \"59757a63-f30a-473f-b02f-55545e2303ff\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") "
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.835748 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-inventory\") pod \"59757a63-f30a-473f-b02f-55545e2303ff\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") "
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.835804 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-libvirt-secret-0\") pod \"59757a63-f30a-473f-b02f-55545e2303ff\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") "
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.835834 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-ssh-key\") pod \"59757a63-f30a-473f-b02f-55545e2303ff\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") "
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.835856 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-libvirt-combined-ca-bundle\") pod \"59757a63-f30a-473f-b02f-55545e2303ff\" (UID: \"59757a63-f30a-473f-b02f-55545e2303ff\") "
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.840226 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "59757a63-f30a-473f-b02f-55545e2303ff" (UID: "59757a63-f30a-473f-b02f-55545e2303ff"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.843670 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59757a63-f30a-473f-b02f-55545e2303ff-kube-api-access-h87gk" (OuterVolumeSpecName: "kube-api-access-h87gk") pod "59757a63-f30a-473f-b02f-55545e2303ff" (UID: "59757a63-f30a-473f-b02f-55545e2303ff"). InnerVolumeSpecName "kube-api-access-h87gk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.856768 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-inventory" (OuterVolumeSpecName: "inventory") pod "59757a63-f30a-473f-b02f-55545e2303ff" (UID: "59757a63-f30a-473f-b02f-55545e2303ff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.859772 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "59757a63-f30a-473f-b02f-55545e2303ff" (UID: "59757a63-f30a-473f-b02f-55545e2303ff"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.860073 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "59757a63-f30a-473f-b02f-55545e2303ff" (UID: "59757a63-f30a-473f-b02f-55545e2303ff"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.937941 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h87gk\" (UniqueName: \"kubernetes.io/projected/59757a63-f30a-473f-b02f-55545e2303ff-kube-api-access-h87gk\") on node \"crc\" DevicePath \"\""
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.937969 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.937979 4482 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.937987 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 07:22:40 crc kubenswrapper[4482]: I1125 07:22:40.937995 4482 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59757a63-f30a-473f-b02f-55545e2303ff-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.345556 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw" event={"ID":"59757a63-f30a-473f-b02f-55545e2303ff","Type":"ContainerDied","Data":"9cf670a7b17a3ec54656ac5b205788632ff7f8985d2bc581c83703d9b403aad7"}
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.345850 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cf670a7b17a3ec54656ac5b205788632ff7f8985d2bc581c83703d9b403aad7"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.345688 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tpcgw"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.455300 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"]
Nov 25 07:22:41 crc kubenswrapper[4482]: E1125 07:22:41.455815 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded0f019-e4aa-4632-b88b-436b83ea4db3" containerName="registry-server"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.455830 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded0f019-e4aa-4632-b88b-436b83ea4db3" containerName="registry-server"
Nov 25 07:22:41 crc kubenswrapper[4482]: E1125 07:22:41.455880 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded0f019-e4aa-4632-b88b-436b83ea4db3" containerName="extract-utilities"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.455887 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded0f019-e4aa-4632-b88b-436b83ea4db3" containerName="extract-utilities"
Nov 25 07:22:41 crc kubenswrapper[4482]: E1125 07:22:41.455903 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59757a63-f30a-473f-b02f-55545e2303ff" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.455909 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="59757a63-f30a-473f-b02f-55545e2303ff" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:22:41 crc kubenswrapper[4482]: E1125 07:22:41.455934 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded0f019-e4aa-4632-b88b-436b83ea4db3" containerName="extract-content"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.455940 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded0f019-e4aa-4632-b88b-436b83ea4db3" containerName="extract-content"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.456130 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded0f019-e4aa-4632-b88b-436b83ea4db3" containerName="registry-server"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.456160 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="59757a63-f30a-473f-b02f-55545e2303ff" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.456965 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.459726 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.459906 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.460086 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.460271 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.460461 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.460556 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.462979 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.466777 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"]
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.555829 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p59p\" (UniqueName: \"kubernetes.io/projected/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-kube-api-access-7p59p\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.555907 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.555968 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.555995 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.556019 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.556066 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.556095 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.556122 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.556159 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.657474 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.657532 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.657570 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.657616 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p59p\" (UniqueName: \"kubernetes.io/projected/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-kube-api-access-7p59p\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"
\"kubernetes.io/projected/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-kube-api-access-7p59p\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.657651 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.657698 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.657722 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.657741 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.657791 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.658454 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.662104 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.662242 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-ssh-key\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.662747 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.663328 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.663573 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.664704 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.665466 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.672746 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p59p\" (UniqueName: \"kubernetes.io/projected/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-kube-api-access-7p59p\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lvj57\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:41 crc kubenswrapper[4482]: I1125 07:22:41.775645 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:22:42 crc kubenswrapper[4482]: I1125 07:22:42.218890 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57"] Nov 25 07:22:42 crc kubenswrapper[4482]: I1125 07:22:42.357906 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" event={"ID":"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0","Type":"ContainerStarted","Data":"ed58679f1e2c561cd88cc0490df01995f67b898d97b9ac382c929e5e728fc592"} Nov 25 07:22:43 crc kubenswrapper[4482]: I1125 07:22:43.366245 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" event={"ID":"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0","Type":"ContainerStarted","Data":"aaf847f9a4680165892d257709b2be93669ec3553304d872bf8603f26a62f2d6"} Nov 25 07:22:43 crc kubenswrapper[4482]: I1125 07:22:43.388628 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" podStartSLOduration=1.6537237820000001 podStartE2EDuration="2.388604451s" podCreationTimestamp="2025-11-25 07:22:41 +0000 UTC" firstStartedPulling="2025-11-25 07:22:42.221306126 +0000 UTC m=+2136.709537386" lastFinishedPulling="2025-11-25 07:22:42.956186797 +0000 UTC m=+2137.444418055" observedRunningTime="2025-11-25 07:22:43.380112168 +0000 UTC m=+2137.868343427" watchObservedRunningTime="2025-11-25 07:22:43.388604451 +0000 UTC m=+2137.876835711" Nov 25 07:23:09 crc kubenswrapper[4482]: I1125 07:23:09.118060 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:23:09 crc kubenswrapper[4482]: I1125 07:23:09.118572 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:23:39 crc kubenswrapper[4482]: I1125 07:23:39.118253 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:23:39 crc kubenswrapper[4482]: I1125 07:23:39.119049 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:23:39 crc kubenswrapper[4482]: I1125 07:23:39.119111 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 07:23:39 crc kubenswrapper[4482]: I1125 07:23:39.119837 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:23:39 crc kubenswrapper[4482]: I1125 07:23:39.119910 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" gracePeriod=600 Nov 25 07:23:39 crc kubenswrapper[4482]: E1125 07:23:39.250011 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:23:39 crc kubenswrapper[4482]: I1125 07:23:39.822767 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" exitCode=0 Nov 25 07:23:39 crc kubenswrapper[4482]: I1125 07:23:39.822815 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a"} Nov 25 07:23:39 crc kubenswrapper[4482]: I1125 07:23:39.822854 4482 scope.go:117] "RemoveContainer" containerID="024965c47687aa00d0ad8db4748dfa0d2b39b80a48007bdb858861dc5eebf7f7" Nov 25 07:23:39 crc kubenswrapper[4482]: I1125 07:23:39.824278 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:23:39 crc kubenswrapper[4482]: E1125 07:23:39.824666 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:23:51 crc kubenswrapper[4482]: I1125 07:23:51.830859 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:23:51 crc kubenswrapper[4482]: E1125 07:23:51.831747 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:24:05 crc kubenswrapper[4482]: I1125 07:24:05.835317 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:24:05 crc kubenswrapper[4482]: E1125 07:24:05.835825 4482 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:24:18 crc kubenswrapper[4482]: I1125 07:24:18.831447 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:24:18 crc kubenswrapper[4482]: E1125 07:24:18.832249 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:24:29 crc kubenswrapper[4482]: I1125 07:24:29.830821 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:24:29 crc kubenswrapper[4482]: E1125 07:24:29.831511 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:24:40 crc kubenswrapper[4482]: I1125 07:24:40.830612 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:24:40 crc kubenswrapper[4482]: E1125 07:24:40.831420 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:24:45 crc kubenswrapper[4482]: I1125 07:24:45.360601 4482 generic.go:334] "Generic (PLEG): container finished" podID="0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0" containerID="aaf847f9a4680165892d257709b2be93669ec3553304d872bf8603f26a62f2d6" exitCode=0 Nov 25 07:24:45 crc kubenswrapper[4482]: I1125 07:24:45.360686 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" event={"ID":"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0","Type":"ContainerDied","Data":"aaf847f9a4680165892d257709b2be93669ec3553304d872bf8603f26a62f2d6"} Nov 25 07:24:46 crc kubenswrapper[4482]: I1125 07:24:46.845413 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:24:46 crc kubenswrapper[4482]: I1125 07:24:46.998605 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-combined-ca-bundle\") pod \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " Nov 25 07:24:46 crc kubenswrapper[4482]: I1125 07:24:46.999567 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-inventory\") pod \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " Nov 25 07:24:46 crc kubenswrapper[4482]: I1125 07:24:46.999728 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-cell1-compute-config-0\") pod \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " Nov 25 07:24:46 crc kubenswrapper[4482]: I1125 07:24:46.999811 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p59p\" (UniqueName: \"kubernetes.io/projected/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-kube-api-access-7p59p\") pod \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " Nov 25 07:24:46 crc kubenswrapper[4482]: I1125 07:24:46.999907 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-migration-ssh-key-1\") pod \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.000037 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-migration-ssh-key-0\") pod \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.000130 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-ssh-key\") pod \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.000282 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-cell1-compute-config-1\") pod \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.000376 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-extra-config-0\") pod \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\" (UID: \"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0\") " Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.009669 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-kube-api-access-7p59p" (OuterVolumeSpecName: "kube-api-access-7p59p") pod "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0" (UID: "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0"). InnerVolumeSpecName "kube-api-access-7p59p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.045237 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0" (UID: "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.048286 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-inventory" (OuterVolumeSpecName: "inventory") pod "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0" (UID: "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.065234 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0" (UID: "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.099062 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0" (UID: "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.103286 4482 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.103332 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.103342 4482 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.103363 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p59p\" (UniqueName: \"kubernetes.io/projected/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-kube-api-access-7p59p\") on node \"crc\" DevicePath \"\"" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.103373 4482 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.135725 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0" (UID: "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.136644 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0" (UID: "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.146259 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0" (UID: "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.155331 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0" (UID: "0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.204158 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.204201 4482 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.204215 4482 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.204224 4482 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.378375 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" event={"ID":"0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0","Type":"ContainerDied","Data":"ed58679f1e2c561cd88cc0490df01995f67b898d97b9ac382c929e5e728fc592"} Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.378419 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed58679f1e2c561cd88cc0490df01995f67b898d97b9ac382c929e5e728fc592" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.378510 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lvj57" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.450101 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k"] Nov 25 07:24:47 crc kubenswrapper[4482]: E1125 07:24:47.450472 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.450490 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.450677 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d8a7ba6-c4c4-4d6d-9a9b-ab7151402ac0" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.451257 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.453666 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.454288 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.454433 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.454854 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fcbgq" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.455502 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.460191 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k"] Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.511197 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.511285 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.511426 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.511488 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.511638 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 
07:24:47.511687 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.511758 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88npb\" (UniqueName: \"kubernetes.io/projected/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-kube-api-access-88npb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.612968 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88npb\" (UniqueName: \"kubernetes.io/projected/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-kube-api-access-88npb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.613297 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.613358 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.613382 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.613404 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.613466 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.613492 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.617377 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.617449 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.619012 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.619687 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.620770 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.621691 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.636428 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88npb\" (UniqueName: \"kubernetes.io/projected/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-kube-api-access-88npb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k\" (UID: 
\"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:47 crc kubenswrapper[4482]: I1125 07:24:47.771056 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:24:48 crc kubenswrapper[4482]: I1125 07:24:48.328885 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k"] Nov 25 07:24:48 crc kubenswrapper[4482]: I1125 07:24:48.385639 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" event={"ID":"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba","Type":"ContainerStarted","Data":"e4bd8490df695082408483371e8f872f955c58cc3d591384a22e0a631392c9ae"} Nov 25 07:24:49 crc kubenswrapper[4482]: I1125 07:24:49.394227 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" event={"ID":"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba","Type":"ContainerStarted","Data":"e7f51aa2b15544e47a0156ce24b5aef2f510845835337f4f77c4df887f8f95c6"} Nov 25 07:24:49 crc kubenswrapper[4482]: I1125 07:24:49.422565 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" podStartSLOduration=1.8447273069999999 podStartE2EDuration="2.422536693s" podCreationTimestamp="2025-11-25 07:24:47 +0000 UTC" firstStartedPulling="2025-11-25 07:24:48.342873064 +0000 UTC m=+2262.831104323" lastFinishedPulling="2025-11-25 07:24:48.920682451 +0000 UTC m=+2263.408913709" observedRunningTime="2025-11-25 07:24:49.417838589 +0000 UTC m=+2263.906069848" watchObservedRunningTime="2025-11-25 07:24:49.422536693 +0000 UTC m=+2263.910767952" Nov 25 07:24:54 crc kubenswrapper[4482]: I1125 07:24:54.831387 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:24:54 crc kubenswrapper[4482]: E1125 07:24:54.831906 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:25:08 crc kubenswrapper[4482]: I1125 07:25:08.830787 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:25:08 crc kubenswrapper[4482]: E1125 07:25:08.831853 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:25:21 crc kubenswrapper[4482]: I1125 07:25:21.830233 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:25:21 crc kubenswrapper[4482]: E1125 07:25:21.830889 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:25:34 crc kubenswrapper[4482]: I1125 07:25:34.831583 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:25:34 crc kubenswrapper[4482]: E1125 07:25:34.832980 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.022779 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-df5lj"] Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.025638 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-df5lj" Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.040053 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-df5lj"] Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.050915 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-catalog-content\") pod \"community-operators-df5lj\" (UID: \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\") " pod="openshift-marketplace/community-operators-df5lj" Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.051089 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w6p5\" (UniqueName: \"kubernetes.io/projected/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-kube-api-access-5w6p5\") pod \"community-operators-df5lj\" (UID: \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\") " pod="openshift-marketplace/community-operators-df5lj" Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.051186 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-utilities\") pod \"community-operators-df5lj\" (UID: \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\") " pod="openshift-marketplace/community-operators-df5lj" Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.153275 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-catalog-content\") pod \"community-operators-df5lj\" (UID: \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\") " pod="openshift-marketplace/community-operators-df5lj" Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.153433 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w6p5\" (UniqueName: \"kubernetes.io/projected/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-kube-api-access-5w6p5\") pod \"community-operators-df5lj\" (UID: \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\") " 
pod="openshift-marketplace/community-operators-df5lj" Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.153490 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-utilities\") pod \"community-operators-df5lj\" (UID: \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\") " pod="openshift-marketplace/community-operators-df5lj" Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.153881 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-catalog-content\") pod \"community-operators-df5lj\" (UID: \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\") " pod="openshift-marketplace/community-operators-df5lj" Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.153950 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-utilities\") pod \"community-operators-df5lj\" (UID: \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\") " pod="openshift-marketplace/community-operators-df5lj" Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.174558 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w6p5\" (UniqueName: \"kubernetes.io/projected/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-kube-api-access-5w6p5\") pod \"community-operators-df5lj\" (UID: \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\") " pod="openshift-marketplace/community-operators-df5lj" Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.350627 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-df5lj" Nov 25 07:25:37 crc kubenswrapper[4482]: I1125 07:25:37.898980 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-df5lj"] Nov 25 07:25:38 crc kubenswrapper[4482]: E1125 07:25:38.235441 4482 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cf83f3e_f08e_4d0b_9cdb_1fda380ec2c6.slice/crio-conmon-56b89a2e475a97968960990ecce3b2dadde08ceff6dfbacff2a06d60c243af30.scope\": RecentStats: unable to find data in memory cache]" Nov 25 07:25:38 crc kubenswrapper[4482]: I1125 07:25:38.803141 4482 generic.go:334] "Generic (PLEG): container finished" podID="2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" containerID="56b89a2e475a97968960990ecce3b2dadde08ceff6dfbacff2a06d60c243af30" exitCode=0 Nov 25 07:25:38 crc kubenswrapper[4482]: I1125 07:25:38.803202 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df5lj" event={"ID":"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6","Type":"ContainerDied","Data":"56b89a2e475a97968960990ecce3b2dadde08ceff6dfbacff2a06d60c243af30"} Nov 25 07:25:38 crc kubenswrapper[4482]: I1125 07:25:38.803233 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df5lj" event={"ID":"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6","Type":"ContainerStarted","Data":"b6a40ff613409486d0c93d1eab8ffc997ba6ae66cea054feb80ed0d75230cc08"} Nov 25 07:25:39 crc kubenswrapper[4482]: I1125 07:25:39.416864 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kl49z"] Nov 25 07:25:39 crc kubenswrapper[4482]: I1125 07:25:39.419674 4482 util.go:30] "No sandbox 
Nov 25 07:25:39 crc kubenswrapper[4482]: I1125 07:25:39.427611 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kl49z"] Nov 25 07:25:39 crc kubenswrapper[4482]: I1125 07:25:39.515408 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rknnp\" (UniqueName: \"kubernetes.io/projected/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-kube-api-access-rknnp\") pod \"redhat-operators-kl49z\" (UID: \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\") " pod="openshift-marketplace/redhat-operators-kl49z" Nov 25 07:25:39 crc kubenswrapper[4482]: I1125 07:25:39.515487 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-catalog-content\") pod \"redhat-operators-kl49z\" (UID: \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\") " pod="openshift-marketplace/redhat-operators-kl49z" Nov 25 07:25:39 crc kubenswrapper[4482]: I1125 07:25:39.515555 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-utilities\") pod \"redhat-operators-kl49z\" (UID: \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\") " pod="openshift-marketplace/redhat-operators-kl49z" Nov 25 07:25:39 crc kubenswrapper[4482]: I1125 07:25:39.617896 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rknnp\" (UniqueName: \"kubernetes.io/projected/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-kube-api-access-rknnp\") pod \"redhat-operators-kl49z\" (UID: \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\") " pod="openshift-marketplace/redhat-operators-kl49z" Nov 25 07:25:39 crc kubenswrapper[4482]: I1125 07:25:39.618034 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-catalog-content\") pod \"redhat-operators-kl49z\" (UID: \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\") " pod="openshift-marketplace/redhat-operators-kl49z" Nov 25 07:25:39 crc kubenswrapper[4482]: I1125 07:25:39.618117 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-utilities\") pod \"redhat-operators-kl49z\" (UID: \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\") " pod="openshift-marketplace/redhat-operators-kl49z" Nov 25 07:25:39 crc kubenswrapper[4482]: I1125 07:25:39.619006 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-utilities\") pod \"redhat-operators-kl49z\" (UID: \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\") " pod="openshift-marketplace/redhat-operators-kl49z" Nov 25 07:25:39 crc kubenswrapper[4482]: I1125 07:25:39.619054 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-catalog-content\") pod \"redhat-operators-kl49z\" (UID: \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\") " pod="openshift-marketplace/redhat-operators-kl49z" Nov 25 07:25:39 crc kubenswrapper[4482]: I1125 07:25:39.637388 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-rknnp\" (UniqueName: \"kubernetes.io/projected/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-kube-api-access-rknnp\") pod \"redhat-operators-kl49z\" (UID: \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\") " pod="openshift-marketplace/redhat-operators-kl49z" Nov 25 07:25:39 crc kubenswrapper[4482]: I1125 07:25:39.736783 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kl49z" Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.195798 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kl49z"] Nov 25 07:25:40 crc kubenswrapper[4482]: W1125 07:25:40.223983 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbcdbbb5_c7f4_45f6_bf76_dcfa0d0a0b27.slice/crio-10c2fa5746a899c07a54352874f1c356170c67b36b06bb60db8e1f09a6321977 WatchSource:0}: Error finding container 10c2fa5746a899c07a54352874f1c356170c67b36b06bb60db8e1f09a6321977: Status 404 returned error can't find the container with id 10c2fa5746a899c07a54352874f1c356170c67b36b06bb60db8e1f09a6321977 Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.626644 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cqjw6"] Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.628577 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqjw6" Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.633330 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cqjw6"] Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.639527 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b9db\" (UniqueName: \"kubernetes.io/projected/292b3bc0-f8e2-4202-9468-62d164a2605c-kube-api-access-5b9db\") pod \"certified-operators-cqjw6\" (UID: \"292b3bc0-f8e2-4202-9468-62d164a2605c\") " pod="openshift-marketplace/certified-operators-cqjw6" Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.639580 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292b3bc0-f8e2-4202-9468-62d164a2605c-catalog-content\") pod \"certified-operators-cqjw6\" (UID: \"292b3bc0-f8e2-4202-9468-62d164a2605c\") " pod="openshift-marketplace/certified-operators-cqjw6" Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.639697 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292b3bc0-f8e2-4202-9468-62d164a2605c-utilities\") pod \"certified-operators-cqjw6\" (UID: \"292b3bc0-f8e2-4202-9468-62d164a2605c\") " pod="openshift-marketplace/certified-operators-cqjw6" Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.741244 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292b3bc0-f8e2-4202-9468-62d164a2605c-utilities\") pod \"certified-operators-cqjw6\" (UID: \"292b3bc0-f8e2-4202-9468-62d164a2605c\") " pod="openshift-marketplace/certified-operators-cqjw6" Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.741310 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b9db\" (UniqueName: 
\"kubernetes.io/projected/292b3bc0-f8e2-4202-9468-62d164a2605c-kube-api-access-5b9db\") pod \"certified-operators-cqjw6\" (UID: \"292b3bc0-f8e2-4202-9468-62d164a2605c\") " pod="openshift-marketplace/certified-operators-cqjw6" Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.741347 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292b3bc0-f8e2-4202-9468-62d164a2605c-catalog-content\") pod \"certified-operators-cqjw6\" (UID: \"292b3bc0-f8e2-4202-9468-62d164a2605c\") " pod="openshift-marketplace/certified-operators-cqjw6" Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.741766 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292b3bc0-f8e2-4202-9468-62d164a2605c-catalog-content\") pod \"certified-operators-cqjw6\" (UID: \"292b3bc0-f8e2-4202-9468-62d164a2605c\") " pod="openshift-marketplace/certified-operators-cqjw6" Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.741975 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292b3bc0-f8e2-4202-9468-62d164a2605c-utilities\") pod \"certified-operators-cqjw6\" (UID: \"292b3bc0-f8e2-4202-9468-62d164a2605c\") " pod="openshift-marketplace/certified-operators-cqjw6" Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.765833 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b9db\" (UniqueName: \"kubernetes.io/projected/292b3bc0-f8e2-4202-9468-62d164a2605c-kube-api-access-5b9db\") pod \"certified-operators-cqjw6\" (UID: \"292b3bc0-f8e2-4202-9468-62d164a2605c\") " pod="openshift-marketplace/certified-operators-cqjw6" Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.823765 4482 generic.go:334] "Generic (PLEG): container finished" podID="dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" containerID="d0a72ac91b77eea0cc32ee3d464d0191bf27165e065b4f4adadfce24de55a052" exitCode=0 Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.823820 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kl49z" event={"ID":"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27","Type":"ContainerDied","Data":"d0a72ac91b77eea0cc32ee3d464d0191bf27165e065b4f4adadfce24de55a052"} Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.824057 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kl49z" event={"ID":"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27","Type":"ContainerStarted","Data":"10c2fa5746a899c07a54352874f1c356170c67b36b06bb60db8e1f09a6321977"} Nov 25 07:25:40 crc kubenswrapper[4482]: I1125 07:25:40.948425 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cqjw6" Nov 25 07:25:41 crc kubenswrapper[4482]: I1125 07:25:41.518478 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cqjw6"] Nov 25 07:25:41 crc kubenswrapper[4482]: W1125 07:25:41.529910 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod292b3bc0_f8e2_4202_9468_62d164a2605c.slice/crio-94f3583af20235e8c7fcea8a50784102902c37aff4ea7677d51c6fb2e5b9e4e6 WatchSource:0}: Error finding container 94f3583af20235e8c7fcea8a50784102902c37aff4ea7677d51c6fb2e5b9e4e6: Status 404 returned error can't find the container with id 94f3583af20235e8c7fcea8a50784102902c37aff4ea7677d51c6fb2e5b9e4e6 Nov 25 07:25:41 crc kubenswrapper[4482]: I1125 07:25:41.839033 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjw6" event={"ID":"292b3bc0-f8e2-4202-9468-62d164a2605c","Type":"ContainerStarted","Data":"94f3583af20235e8c7fcea8a50784102902c37aff4ea7677d51c6fb2e5b9e4e6"} Nov 25 07:25:42 crc kubenswrapper[4482]: I1125 07:25:42.851387 4482 generic.go:334] "Generic (PLEG): container finished" podID="292b3bc0-f8e2-4202-9468-62d164a2605c" containerID="93999ec7ce5c37e5c89098d4082dbfcdeba833822ff3e27f9def94404f6abc45" exitCode=0 Nov 25 07:25:42 crc kubenswrapper[4482]: I1125 07:25:42.851435 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjw6" event={"ID":"292b3bc0-f8e2-4202-9468-62d164a2605c","Type":"ContainerDied","Data":"93999ec7ce5c37e5c89098d4082dbfcdeba833822ff3e27f9def94404f6abc45"} Nov 25 07:25:43 crc kubenswrapper[4482]: I1125 07:25:43.860667 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df5lj" event={"ID":"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6","Type":"ContainerStarted","Data":"8d0d8de706ace8b577e155cd56782f6ed9dc37db69c843f1b5d36365c9c0044d"} Nov 25 07:25:43 crc kubenswrapper[4482]: I1125 07:25:43.862998 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kl49z" event={"ID":"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27","Type":"ContainerStarted","Data":"b8116245b917d995362e072dcf556ba55130e38b7c708b7d5f484c1cd593a1d2"} Nov 25 07:25:44 crc kubenswrapper[4482]: I1125 07:25:44.873449 4482 generic.go:334] "Generic (PLEG): container finished" podID="2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" containerID="8d0d8de706ace8b577e155cd56782f6ed9dc37db69c843f1b5d36365c9c0044d" exitCode=0 Nov 25 07:25:44 crc kubenswrapper[4482]: I1125 07:25:44.873553 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df5lj" event={"ID":"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6","Type":"ContainerDied","Data":"8d0d8de706ace8b577e155cd56782f6ed9dc37db69c843f1b5d36365c9c0044d"} Nov 25 07:25:44 crc kubenswrapper[4482]: I1125 07:25:44.878036 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjw6" event={"ID":"292b3bc0-f8e2-4202-9468-62d164a2605c","Type":"ContainerStarted","Data":"86e2af3197c1c2c542efa515a2a666058c3a721a8aaf6aad47b2498e90473e94"} Nov 25 07:25:45 crc kubenswrapper[4482]: I1125 07:25:45.837613 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:25:45 crc kubenswrapper[4482]: E1125 07:25:45.838367 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:25:45 crc kubenswrapper[4482]: I1125 07:25:45.890076 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df5lj" event={"ID":"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6","Type":"ContainerStarted","Data":"f32cbccf3e53d5c1fd59f95ef075707b363f29fa4e7568e834d12633ad3c2718"} Nov 25 07:25:45 crc kubenswrapper[4482]: I1125 07:25:45.909738 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-df5lj" podStartSLOduration=2.369370791 podStartE2EDuration="8.909719846s" podCreationTimestamp="2025-11-25 07:25:37 +0000 UTC" firstStartedPulling="2025-11-25 07:25:38.805638154 +0000 UTC m=+2313.293869413" lastFinishedPulling="2025-11-25 07:25:45.345987209 +0000 UTC m=+2319.834218468" observedRunningTime="2025-11-25 07:25:45.903835898 +0000 UTC m=+2320.392067157" watchObservedRunningTime="2025-11-25 07:25:45.909719846 +0000 UTC m=+2320.397951105" Nov 25 07:25:46 crc kubenswrapper[4482]: I1125 07:25:46.900671 4482 generic.go:334] "Generic (PLEG): container finished" podID="292b3bc0-f8e2-4202-9468-62d164a2605c" containerID="86e2af3197c1c2c542efa515a2a666058c3a721a8aaf6aad47b2498e90473e94" exitCode=0 Nov 25 07:25:46 crc kubenswrapper[4482]: I1125 07:25:46.900719 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjw6" event={"ID":"292b3bc0-f8e2-4202-9468-62d164a2605c","Type":"ContainerDied","Data":"86e2af3197c1c2c542efa515a2a666058c3a721a8aaf6aad47b2498e90473e94"} Nov 25 07:25:46 crc kubenswrapper[4482]: I1125 07:25:46.904868 4482 generic.go:334] "Generic (PLEG): container finished" podID="dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" containerID="b8116245b917d995362e072dcf556ba55130e38b7c708b7d5f484c1cd593a1d2" exitCode=0 Nov 25 07:25:46 crc kubenswrapper[4482]: I1125 07:25:46.904958 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kl49z" event={"ID":"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27","Type":"ContainerDied","Data":"b8116245b917d995362e072dcf556ba55130e38b7c708b7d5f484c1cd593a1d2"} Nov 25 07:25:47 crc kubenswrapper[4482]: I1125 07:25:47.351145 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-df5lj" Nov 25 07:25:47 crc kubenswrapper[4482]: I1125 07:25:47.351244 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-df5lj" Nov 25 07:25:47 crc kubenswrapper[4482]: I1125 07:25:47.918815 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjw6" event={"ID":"292b3bc0-f8e2-4202-9468-62d164a2605c","Type":"ContainerStarted","Data":"574595656dbb79b6c6dcfcb6868412300e4d051e803526534c5c38b731ab098a"} Nov 25 07:25:47 crc kubenswrapper[4482]: I1125 07:25:47.920573 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kl49z" event={"ID":"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27","Type":"ContainerStarted","Data":"515353fbaf12af0ef29554186fedbfe3547fbea07ceafcbdb20bed320aace04a"} Nov 25 07:25:47 crc kubenswrapper[4482]: I1125 
Nov 25 07:25:47 crc kubenswrapper[4482]: I1125 07:25:47.940393 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cqjw6" podStartSLOduration=3.969788014 podStartE2EDuration="7.94037887s" podCreationTimestamp="2025-11-25 07:25:40 +0000 UTC" firstStartedPulling="2025-11-25 07:25:43.405702521 +0000 UTC m=+2317.893933780" lastFinishedPulling="2025-11-25 07:25:47.376293378 +0000 UTC m=+2321.864524636" observedRunningTime="2025-11-25 07:25:47.938305383 +0000 UTC m=+2322.426536643" watchObservedRunningTime="2025-11-25 07:25:47.94037887 +0000 UTC m=+2322.428610129"
Nov 25 07:25:47 crc kubenswrapper[4482]: I1125 07:25:47.964764 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kl49z" podStartSLOduration=2.407385117 podStartE2EDuration="8.964747205s" podCreationTimestamp="2025-11-25 07:25:39 +0000 UTC" firstStartedPulling="2025-11-25 07:25:40.825481164 +0000 UTC m=+2315.313712423" lastFinishedPulling="2025-11-25 07:25:47.382843252 +0000 UTC m=+2321.871074511" observedRunningTime="2025-11-25 07:25:47.958677286 +0000 UTC m=+2322.446908544" watchObservedRunningTime="2025-11-25 07:25:47.964747205 +0000 UTC m=+2322.452978464"
Nov 25 07:25:48 crc kubenswrapper[4482]: I1125 07:25:48.397766 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-df5lj" podUID="2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" containerName="registry-server" probeResult="failure" output=<
Nov 25 07:25:48 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s
Nov 25 07:25:48 crc kubenswrapper[4482]: >
Nov 25 07:25:49 crc kubenswrapper[4482]: I1125 07:25:49.737437 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kl49z"
Nov 25 07:25:49 crc kubenswrapper[4482]: I1125 07:25:49.737724 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kl49z"
Nov 25 07:25:50 crc kubenswrapper[4482]: I1125 07:25:50.773962 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kl49z" podUID="dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" containerName="registry-server" probeResult="failure" output=<
Nov 25 07:25:50 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s
Nov 25 07:25:50 crc kubenswrapper[4482]: >
Nov 25 07:25:50 crc kubenswrapper[4482]: I1125 07:25:50.949348 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cqjw6"
Nov 25 07:25:50 crc kubenswrapper[4482]: I1125 07:25:50.949406 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cqjw6"
Nov 25 07:25:50 crc kubenswrapper[4482]: I1125 07:25:50.985880 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cqjw6"
Nov 25 07:25:57 crc kubenswrapper[4482]: I1125 07:25:57.385569 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-df5lj"
Nov 25 07:25:57 crc kubenswrapper[4482]: I1125 07:25:57.425844 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-df5lj"
Nov 25 07:25:57 crc kubenswrapper[4482]: I1125 07:25:57.503418 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-df5lj"]
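The "Probe failed ... timeout: failed to connect service \":50051\" within 1s" output above is the signature of a gRPC-style health check against a registry-server port that is not serving yet; once catalog extraction finishes, the startup probes flip to status="started" and readiness to "ready", as the surrounding entries show. A rough Go equivalent of such a check, an illustrative sketch rather than the actual probe binary (the address and the 1s budget mirror the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    // Connects to :50051 and calls the standard gRPC health service,
    // giving up after 1s overall, like the failing probe above.
    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        conn, err := grpc.DialContext(ctx, "localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithBlock())
        if err != nil {
            fmt.Println("timeout: failed to connect service \":50051\" within 1s")
            return
        }
        defer conn.Close()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
            fmt.Println("unhealthy:", err)
            return
        }
        fmt.Println("serving")
    }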
Nov 25 07:25:57 crc kubenswrapper[4482]: I1125 07:25:57.616697 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69xjj"]
Nov 25 07:25:57 crc kubenswrapper[4482]: I1125 07:25:57.616927 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-69xjj" podUID="9940aeba-b78c-4271-9748-02d3200887f8" containerName="registry-server" containerID="cri-o://a99c225b604127ea1387c6334ca3c19e91a0bb06220f94dd01b9487f8235c0b7" gracePeriod=2
Nov 25 07:25:58 crc kubenswrapper[4482]: I1125 07:25:58.030662 4482 generic.go:334] "Generic (PLEG): container finished" podID="9940aeba-b78c-4271-9748-02d3200887f8" containerID="a99c225b604127ea1387c6334ca3c19e91a0bb06220f94dd01b9487f8235c0b7" exitCode=0
Nov 25 07:25:58 crc kubenswrapper[4482]: I1125 07:25:58.035314 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69xjj" event={"ID":"9940aeba-b78c-4271-9748-02d3200887f8","Type":"ContainerDied","Data":"a99c225b604127ea1387c6334ca3c19e91a0bb06220f94dd01b9487f8235c0b7"}
Nov 25 07:25:58 crc kubenswrapper[4482]: I1125 07:25:58.162561 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69xjj"
Nov 25 07:25:58 crc kubenswrapper[4482]: I1125 07:25:58.198027 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9940aeba-b78c-4271-9748-02d3200887f8-catalog-content\") pod \"9940aeba-b78c-4271-9748-02d3200887f8\" (UID: \"9940aeba-b78c-4271-9748-02d3200887f8\") "
Nov 25 07:25:58 crc kubenswrapper[4482]: I1125 07:25:58.198321 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9940aeba-b78c-4271-9748-02d3200887f8-utilities\") pod \"9940aeba-b78c-4271-9748-02d3200887f8\" (UID: \"9940aeba-b78c-4271-9748-02d3200887f8\") "
Nov 25 07:25:58 crc kubenswrapper[4482]: I1125 07:25:58.198395 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8t7g\" (UniqueName: \"kubernetes.io/projected/9940aeba-b78c-4271-9748-02d3200887f8-kube-api-access-h8t7g\") pod \"9940aeba-b78c-4271-9748-02d3200887f8\" (UID: \"9940aeba-b78c-4271-9748-02d3200887f8\") "
Nov 25 07:25:58 crc kubenswrapper[4482]: I1125 07:25:58.205045 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9940aeba-b78c-4271-9748-02d3200887f8-utilities" (OuterVolumeSpecName: "utilities") pod "9940aeba-b78c-4271-9748-02d3200887f8" (UID: "9940aeba-b78c-4271-9748-02d3200887f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:25:58 crc kubenswrapper[4482]: I1125 07:25:58.230378 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9940aeba-b78c-4271-9748-02d3200887f8-kube-api-access-h8t7g" (OuterVolumeSpecName: "kube-api-access-h8t7g") pod "9940aeba-b78c-4271-9748-02d3200887f8" (UID: "9940aeba-b78c-4271-9748-02d3200887f8"). InnerVolumeSpecName "kube-api-access-h8t7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:25:58 crc kubenswrapper[4482]: I1125 07:25:58.261448 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9940aeba-b78c-4271-9748-02d3200887f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9940aeba-b78c-4271-9748-02d3200887f8" (UID: "9940aeba-b78c-4271-9748-02d3200887f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:25:58 crc kubenswrapper[4482]: I1125 07:25:58.301087 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9940aeba-b78c-4271-9748-02d3200887f8-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 07:25:58 crc kubenswrapper[4482]: I1125 07:25:58.301117 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8t7g\" (UniqueName: \"kubernetes.io/projected/9940aeba-b78c-4271-9748-02d3200887f8-kube-api-access-h8t7g\") on node \"crc\" DevicePath \"\""
Nov 25 07:25:58 crc kubenswrapper[4482]: I1125 07:25:58.301129 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9940aeba-b78c-4271-9748-02d3200887f8-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 07:25:59 crc kubenswrapper[4482]: I1125 07:25:59.046385 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69xjj"
Nov 25 07:25:59 crc kubenswrapper[4482]: I1125 07:25:59.050350 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69xjj" event={"ID":"9940aeba-b78c-4271-9748-02d3200887f8","Type":"ContainerDied","Data":"55741f332f6bc55d9504fe8811e198019edf3fe82da886aed437e861d00ec396"}
Nov 25 07:25:59 crc kubenswrapper[4482]: I1125 07:25:59.050429 4482 scope.go:117] "RemoveContainer" containerID="a99c225b604127ea1387c6334ca3c19e91a0bb06220f94dd01b9487f8235c0b7"
Nov 25 07:25:59 crc kubenswrapper[4482]: I1125 07:25:59.083472 4482 scope.go:117] "RemoveContainer" containerID="111dae44ca1f235f4c7530176408c328e38b94c6e9d2539d5112c1fa358e2d7a"
Nov 25 07:25:59 crc kubenswrapper[4482]: I1125 07:25:59.096671 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69xjj"]
Nov 25 07:25:59 crc kubenswrapper[4482]: I1125 07:25:59.109006 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-69xjj"]
Nov 25 07:25:59 crc kubenswrapper[4482]: I1125 07:25:59.133445 4482 scope.go:117] "RemoveContainer" containerID="bcb197619c9355d1338edb74f18faa7513b378fcda97decc00bd993b86d48c88"
Nov 25 07:25:59 crc kubenswrapper[4482]: I1125 07:25:59.771364 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kl49z"
Nov 25 07:25:59 crc kubenswrapper[4482]: I1125 07:25:59.807297 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kl49z"
Nov 25 07:25:59 crc kubenswrapper[4482]: I1125 07:25:59.839121 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9940aeba-b78c-4271-9748-02d3200887f8" path="/var/lib/kubelet/pods/9940aeba-b78c-4271-9748-02d3200887f8/volumes"
Nov 25 07:26:00 crc kubenswrapper[4482]: I1125 07:26:00.831297 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a"
Nov 25 07:26:00 crc kubenswrapper[4482]: E1125 07:26:00.831579 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 07:26:00 crc kubenswrapper[4482]: I1125 07:26:00.990325 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cqjw6"
Nov 25 07:26:02 crc kubenswrapper[4482]: I1125 07:26:02.022038 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kl49z"]
Nov 25 07:26:02 crc kubenswrapper[4482]: I1125 07:26:02.023397 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kl49z" podUID="dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" containerName="registry-server" containerID="cri-o://515353fbaf12af0ef29554186fedbfe3547fbea07ceafcbdb20bed320aace04a" gracePeriod=2
Nov 25 07:26:02 crc kubenswrapper[4482]: I1125 07:26:02.432016 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kl49z"
Nov 25 07:26:02 crc kubenswrapper[4482]: I1125 07:26:02.468112 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rknnp\" (UniqueName: \"kubernetes.io/projected/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-kube-api-access-rknnp\") pod \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\" (UID: \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\") "
Nov 25 07:26:02 crc kubenswrapper[4482]: I1125 07:26:02.468214 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-catalog-content\") pod \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\" (UID: \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\") "
Nov 25 07:26:02 crc kubenswrapper[4482]: I1125 07:26:02.468284 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-utilities\") pod \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\" (UID: \"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27\") "
Nov 25 07:26:02 crc kubenswrapper[4482]: I1125 07:26:02.469260 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-utilities" (OuterVolumeSpecName: "utilities") pod "dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" (UID: "dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:26:02 crc kubenswrapper[4482]: I1125 07:26:02.479353 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-kube-api-access-rknnp" (OuterVolumeSpecName: "kube-api-access-rknnp") pod "dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" (UID: "dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27"). InnerVolumeSpecName "kube-api-access-rknnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:26:02 crc kubenswrapper[4482]: I1125 07:26:02.552454 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" (UID: "dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:26:02 crc kubenswrapper[4482]: I1125 07:26:02.569918 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rknnp\" (UniqueName: \"kubernetes.io/projected/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-kube-api-access-rknnp\") on node \"crc\" DevicePath \"\""
Nov 25 07:26:02 crc kubenswrapper[4482]: I1125 07:26:02.569948 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 07:26:02 crc kubenswrapper[4482]: I1125 07:26:02.569958 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.082704 4482 generic.go:334] "Generic (PLEG): container finished" podID="dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" containerID="515353fbaf12af0ef29554186fedbfe3547fbea07ceafcbdb20bed320aace04a" exitCode=0
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.082778 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kl49z"
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.082815 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kl49z" event={"ID":"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27","Type":"ContainerDied","Data":"515353fbaf12af0ef29554186fedbfe3547fbea07ceafcbdb20bed320aace04a"}
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.083499 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kl49z" event={"ID":"dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27","Type":"ContainerDied","Data":"10c2fa5746a899c07a54352874f1c356170c67b36b06bb60db8e1f09a6321977"}
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.083533 4482 scope.go:117] "RemoveContainer" containerID="515353fbaf12af0ef29554186fedbfe3547fbea07ceafcbdb20bed320aace04a"
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.100425 4482 scope.go:117] "RemoveContainer" containerID="b8116245b917d995362e072dcf556ba55130e38b7c708b7d5f484c1cd593a1d2"
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.111420 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kl49z"]
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.116639 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kl49z"]
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.127107 4482 scope.go:117] "RemoveContainer" containerID="d0a72ac91b77eea0cc32ee3d464d0191bf27165e065b4f4adadfce24de55a052"
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.151500 4482 scope.go:117] "RemoveContainer" containerID="515353fbaf12af0ef29554186fedbfe3547fbea07ceafcbdb20bed320aace04a"
Nov 25 07:26:03 crc kubenswrapper[4482]: E1125 07:26:03.151815 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515353fbaf12af0ef29554186fedbfe3547fbea07ceafcbdb20bed320aace04a\": container with ID starting with 515353fbaf12af0ef29554186fedbfe3547fbea07ceafcbdb20bed320aace04a not found: ID does not exist" containerID="515353fbaf12af0ef29554186fedbfe3547fbea07ceafcbdb20bed320aace04a"
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.151844 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515353fbaf12af0ef29554186fedbfe3547fbea07ceafcbdb20bed320aace04a"} err="failed to get container status \"515353fbaf12af0ef29554186fedbfe3547fbea07ceafcbdb20bed320aace04a\": rpc error: code = NotFound desc = could not find container \"515353fbaf12af0ef29554186fedbfe3547fbea07ceafcbdb20bed320aace04a\": container with ID starting with 515353fbaf12af0ef29554186fedbfe3547fbea07ceafcbdb20bed320aace04a not found: ID does not exist"
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.151866 4482 scope.go:117] "RemoveContainer" containerID="b8116245b917d995362e072dcf556ba55130e38b7c708b7d5f484c1cd593a1d2"
Nov 25 07:26:03 crc kubenswrapper[4482]: E1125 07:26:03.152143 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8116245b917d995362e072dcf556ba55130e38b7c708b7d5f484c1cd593a1d2\": container with ID starting with b8116245b917d995362e072dcf556ba55130e38b7c708b7d5f484c1cd593a1d2 not found: ID does not exist" containerID="b8116245b917d995362e072dcf556ba55130e38b7c708b7d5f484c1cd593a1d2"
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.152196 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8116245b917d995362e072dcf556ba55130e38b7c708b7d5f484c1cd593a1d2"} err="failed to get container status \"b8116245b917d995362e072dcf556ba55130e38b7c708b7d5f484c1cd593a1d2\": rpc error: code = NotFound desc = could not find container \"b8116245b917d995362e072dcf556ba55130e38b7c708b7d5f484c1cd593a1d2\": container with ID starting with b8116245b917d995362e072dcf556ba55130e38b7c708b7d5f484c1cd593a1d2 not found: ID does not exist"
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.152225 4482 scope.go:117] "RemoveContainer" containerID="d0a72ac91b77eea0cc32ee3d464d0191bf27165e065b4f4adadfce24de55a052"
Nov 25 07:26:03 crc kubenswrapper[4482]: E1125 07:26:03.152686 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a72ac91b77eea0cc32ee3d464d0191bf27165e065b4f4adadfce24de55a052\": container with ID starting with d0a72ac91b77eea0cc32ee3d464d0191bf27165e065b4f4adadfce24de55a052 not found: ID does not exist" containerID="d0a72ac91b77eea0cc32ee3d464d0191bf27165e065b4f4adadfce24de55a052"
Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.152748 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a72ac91b77eea0cc32ee3d464d0191bf27165e065b4f4adadfce24de55a052"} err="failed to get container status \"d0a72ac91b77eea0cc32ee3d464d0191bf27165e065b4f4adadfce24de55a052\": rpc error: code = NotFound desc = could not find container \"d0a72ac91b77eea0cc32ee3d464d0191bf27165e065b4f4adadfce24de55a052\": container with ID starting with d0a72ac91b77eea0cc32ee3d464d0191bf27165e065b4f4adadfce24de55a052 not found: ID does not exist"
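The three "ContainerStatus from runtime service failed ... NotFound" errors above look alarming but are benign: the containers were already removed, so when the deletion path re-checks their status, CRI-O answers NotFound and the kubelet just logs the failed lookup and moves on. The usual Go pattern for treating a gRPC NotFound as "already deleted" looks roughly like this (an illustrative sketch, not the kubelet's exact code; the simulated runtime below is invented):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer deletes a container via a gRPC-backed runtime call and
    // treats NotFound as success, since the goal state (gone) already holds.
    func removeContainer(id string, remove func(string) error) error {
        if err := remove(id); err != nil {
            if status.Code(err) == codes.NotFound {
                return nil // already gone: deletion is idempotent
            }
            return fmt.Errorf("remove %s: %w", id, err)
        }
        return nil
    }

    func main() {
        // Simulated runtime that reports the container as missing.
        notFound := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        fmt.Println(removeContainer("515353fbaf12", notFound)) // prints <nil>
    }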
pods=["openshift-marketplace/certified-operators-cqjw6"] Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.817220 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cqjw6" podUID="292b3bc0-f8e2-4202-9468-62d164a2605c" containerName="registry-server" containerID="cri-o://574595656dbb79b6c6dcfcb6868412300e4d051e803526534c5c38b731ab098a" gracePeriod=2 Nov 25 07:26:03 crc kubenswrapper[4482]: I1125 07:26:03.839880 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" path="/var/lib/kubelet/pods/dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27/volumes" Nov 25 07:26:04 crc kubenswrapper[4482]: I1125 07:26:04.114604 4482 generic.go:334] "Generic (PLEG): container finished" podID="292b3bc0-f8e2-4202-9468-62d164a2605c" containerID="574595656dbb79b6c6dcfcb6868412300e4d051e803526534c5c38b731ab098a" exitCode=0 Nov 25 07:26:04 crc kubenswrapper[4482]: I1125 07:26:04.114701 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjw6" event={"ID":"292b3bc0-f8e2-4202-9468-62d164a2605c","Type":"ContainerDied","Data":"574595656dbb79b6c6dcfcb6868412300e4d051e803526534c5c38b731ab098a"} Nov 25 07:26:04 crc kubenswrapper[4482]: I1125 07:26:04.255012 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqjw6" Nov 25 07:26:04 crc kubenswrapper[4482]: I1125 07:26:04.406470 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b9db\" (UniqueName: \"kubernetes.io/projected/292b3bc0-f8e2-4202-9468-62d164a2605c-kube-api-access-5b9db\") pod \"292b3bc0-f8e2-4202-9468-62d164a2605c\" (UID: \"292b3bc0-f8e2-4202-9468-62d164a2605c\") " Nov 25 07:26:04 crc kubenswrapper[4482]: I1125 07:26:04.406696 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292b3bc0-f8e2-4202-9468-62d164a2605c-catalog-content\") pod \"292b3bc0-f8e2-4202-9468-62d164a2605c\" (UID: \"292b3bc0-f8e2-4202-9468-62d164a2605c\") " Nov 25 07:26:04 crc kubenswrapper[4482]: I1125 07:26:04.406899 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292b3bc0-f8e2-4202-9468-62d164a2605c-utilities\") pod \"292b3bc0-f8e2-4202-9468-62d164a2605c\" (UID: \"292b3bc0-f8e2-4202-9468-62d164a2605c\") " Nov 25 07:26:04 crc kubenswrapper[4482]: I1125 07:26:04.408225 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/292b3bc0-f8e2-4202-9468-62d164a2605c-utilities" (OuterVolumeSpecName: "utilities") pod "292b3bc0-f8e2-4202-9468-62d164a2605c" (UID: "292b3bc0-f8e2-4202-9468-62d164a2605c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:26:04 crc kubenswrapper[4482]: I1125 07:26:04.411317 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/292b3bc0-f8e2-4202-9468-62d164a2605c-kube-api-access-5b9db" (OuterVolumeSpecName: "kube-api-access-5b9db") pod "292b3bc0-f8e2-4202-9468-62d164a2605c" (UID: "292b3bc0-f8e2-4202-9468-62d164a2605c"). InnerVolumeSpecName "kube-api-access-5b9db". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:26:04 crc kubenswrapper[4482]: I1125 07:26:04.457541 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/292b3bc0-f8e2-4202-9468-62d164a2605c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "292b3bc0-f8e2-4202-9468-62d164a2605c" (UID: "292b3bc0-f8e2-4202-9468-62d164a2605c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:26:04 crc kubenswrapper[4482]: I1125 07:26:04.509564 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292b3bc0-f8e2-4202-9468-62d164a2605c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:04 crc kubenswrapper[4482]: I1125 07:26:04.509601 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292b3bc0-f8e2-4202-9468-62d164a2605c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:04 crc kubenswrapper[4482]: I1125 07:26:04.509611 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b9db\" (UniqueName: \"kubernetes.io/projected/292b3bc0-f8e2-4202-9468-62d164a2605c-kube-api-access-5b9db\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:05 crc kubenswrapper[4482]: I1125 07:26:05.136331 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjw6" event={"ID":"292b3bc0-f8e2-4202-9468-62d164a2605c","Type":"ContainerDied","Data":"94f3583af20235e8c7fcea8a50784102902c37aff4ea7677d51c6fb2e5b9e4e6"} Nov 25 07:26:05 crc kubenswrapper[4482]: I1125 07:26:05.136421 4482 scope.go:117] "RemoveContainer" containerID="574595656dbb79b6c6dcfcb6868412300e4d051e803526534c5c38b731ab098a" Nov 25 07:26:05 crc kubenswrapper[4482]: I1125 07:26:05.136968 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cqjw6" Nov 25 07:26:05 crc kubenswrapper[4482]: I1125 07:26:05.162159 4482 scope.go:117] "RemoveContainer" containerID="86e2af3197c1c2c542efa515a2a666058c3a721a8aaf6aad47b2498e90473e94" Nov 25 07:26:05 crc kubenswrapper[4482]: I1125 07:26:05.165931 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cqjw6"] Nov 25 07:26:05 crc kubenswrapper[4482]: I1125 07:26:05.172411 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cqjw6"] Nov 25 07:26:05 crc kubenswrapper[4482]: I1125 07:26:05.214839 4482 scope.go:117] "RemoveContainer" containerID="93999ec7ce5c37e5c89098d4082dbfcdeba833822ff3e27f9def94404f6abc45" Nov 25 07:26:05 crc kubenswrapper[4482]: I1125 07:26:05.841180 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="292b3bc0-f8e2-4202-9468-62d164a2605c" path="/var/lib/kubelet/pods/292b3bc0-f8e2-4202-9468-62d164a2605c/volumes" Nov 25 07:26:13 crc kubenswrapper[4482]: I1125 07:26:13.830407 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:26:13 crc kubenswrapper[4482]: E1125 07:26:13.831206 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:26:27 crc kubenswrapper[4482]: I1125 07:26:27.831852 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:26:27 crc kubenswrapper[4482]: E1125 07:26:27.832437 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:26:41 crc kubenswrapper[4482]: I1125 07:26:41.830832 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:26:41 crc kubenswrapper[4482]: E1125 07:26:41.831626 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:26:46 crc kubenswrapper[4482]: I1125 07:26:46.456400 4482 generic.go:334] "Generic (PLEG): container finished" podID="6c415d8d-5722-46f4-bf0b-4ffd4c5662ba" containerID="e7f51aa2b15544e47a0156ce24b5aef2f510845835337f4f77c4df887f8f95c6" exitCode=0 Nov 25 07:26:46 crc kubenswrapper[4482]: I1125 07:26:46.456452 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" 
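The repeating RemoveContainer / "Error syncing pod, skipping" pairs above (and again through 07:28 further down) are the kubelet's sync loop re-evaluating machine-config-daemon-p4qzz every ten-odd seconds and declining to restart it because the container is still inside its crash-loop back-off window. The back-off doubles on each failed restart up to a cap, which is where the logged "back-off 5m0s" plateau comes from; a 10s initial delay and 5m cap are the defaults I would expect here, but treat the exact constants as an assumption. A sketch of that shape:

    package main

    import (
        "fmt"
        "time"
    )

    // Crash-loop back-off as suggested by the log: the wait doubles after
    // each failed restart until it hits a cap. The 10s start and 5m cap
    // are assumed defaults, not read out of this cluster's config.
    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("restart %d: back-off %v\n", attempt, delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay // the "back-off 5m0s" plateau in the log
            }
        }
    }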
event={"ID":"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba","Type":"ContainerDied","Data":"e7f51aa2b15544e47a0156ce24b5aef2f510845835337f4f77c4df887f8f95c6"} Nov 25 07:26:47 crc kubenswrapper[4482]: I1125 07:26:47.793144 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" Nov 25 07:26:47 crc kubenswrapper[4482]: I1125 07:26:47.988256 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-inventory\") pod \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " Nov 25 07:26:47 crc kubenswrapper[4482]: I1125 07:26:47.988366 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-0\") pod \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " Nov 25 07:26:47 crc kubenswrapper[4482]: I1125 07:26:47.988503 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-2\") pod \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " Nov 25 07:26:47 crc kubenswrapper[4482]: I1125 07:26:47.988526 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88npb\" (UniqueName: \"kubernetes.io/projected/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-kube-api-access-88npb\") pod \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " Nov 25 07:26:47 crc kubenswrapper[4482]: I1125 07:26:47.988548 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-telemetry-combined-ca-bundle\") pod \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " Nov 25 07:26:47 crc kubenswrapper[4482]: I1125 07:26:47.988610 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-1\") pod \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " Nov 25 07:26:47 crc kubenswrapper[4482]: I1125 07:26:47.988637 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ssh-key\") pod \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\" (UID: \"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba\") " Nov 25 07:26:47 crc kubenswrapper[4482]: I1125 07:26:47.993747 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-kube-api-access-88npb" (OuterVolumeSpecName: "kube-api-access-88npb") pod "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba" (UID: "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba"). InnerVolumeSpecName "kube-api-access-88npb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:26:47 crc kubenswrapper[4482]: I1125 07:26:47.993848 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba" (UID: "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.011096 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba" (UID: "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.011924 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-inventory" (OuterVolumeSpecName: "inventory") pod "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba" (UID: "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.012694 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba" (UID: "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.013053 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba" (UID: "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.017870 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba" (UID: "6c415d8d-5722-46f4-bf0b-4ffd4c5662ba"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.091094 4482 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.091345 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.091396 4482 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.091407 4482 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.091418 4482 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.091430 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88npb\" (UniqueName: \"kubernetes.io/projected/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-kube-api-access-88npb\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.091441 4482 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c415d8d-5722-46f4-bf0b-4ffd4c5662ba-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.474634 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k" event={"ID":"6c415d8d-5722-46f4-bf0b-4ffd4c5662ba","Type":"ContainerDied","Data":"e4bd8490df695082408483371e8f872f955c58cc3d591384a22e0a631392c9ae"} Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.474678 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4bd8490df695082408483371e8f872f955c58cc3d591384a22e0a631392c9ae" Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.474691 4482 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 07:26:48 crc kubenswrapper[4482]: I1125 07:26:48.474691 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cjs5k"
Nov 25 07:26:55 crc kubenswrapper[4482]: I1125 07:26:55.840666 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a"
Nov 25 07:26:55 crc kubenswrapper[4482]: E1125 07:26:55.841371 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 07:27:10 crc kubenswrapper[4482]: I1125 07:27:10.832933 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a"
Nov 25 07:27:10 crc kubenswrapper[4482]: E1125 07:27:10.834118 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 07:27:21 crc kubenswrapper[4482]: I1125 07:27:21.830690 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a"
Nov 25 07:27:21 crc kubenswrapper[4482]: E1125 07:27:21.831625 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 07:27:32 crc kubenswrapper[4482]: I1125 07:27:32.831794 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a"
Nov 25 07:27:32 crc kubenswrapper[4482]: E1125 07:27:32.832919 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.584806 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"]
Nov 25 07:27:39 crc kubenswrapper[4482]: E1125 07:27:39.585823 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292b3bc0-f8e2-4202-9468-62d164a2605c" containerName="extract-utilities"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.585838 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="292b3bc0-f8e2-4202-9468-62d164a2605c" containerName="extract-utilities"
Nov 25 07:27:39 crc kubenswrapper[4482]: E1125 07:27:39.585855 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292b3bc0-f8e2-4202-9468-62d164a2605c" containerName="registry-server"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.585860 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="292b3bc0-f8e2-4202-9468-62d164a2605c" containerName="registry-server"
Nov 25 07:27:39 crc kubenswrapper[4482]: E1125 07:27:39.585882 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" containerName="extract-content"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.585886 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" containerName="extract-content"
Nov 25 07:27:39 crc kubenswrapper[4482]: E1125 07:27:39.585896 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9940aeba-b78c-4271-9748-02d3200887f8" containerName="extract-content"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.585903 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="9940aeba-b78c-4271-9748-02d3200887f8" containerName="extract-content"
Nov 25 07:27:39 crc kubenswrapper[4482]: E1125 07:27:39.585914 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c415d8d-5722-46f4-bf0b-4ffd4c5662ba" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.585922 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c415d8d-5722-46f4-bf0b-4ffd4c5662ba" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:27:39 crc kubenswrapper[4482]: E1125 07:27:39.585939 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" containerName="extract-utilities"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.585947 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" containerName="extract-utilities"
Nov 25 07:27:39 crc kubenswrapper[4482]: E1125 07:27:39.585956 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292b3bc0-f8e2-4202-9468-62d164a2605c" containerName="extract-content"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.585962 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="292b3bc0-f8e2-4202-9468-62d164a2605c" containerName="extract-content"
Nov 25 07:27:39 crc kubenswrapper[4482]: E1125 07:27:39.585972 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" containerName="registry-server"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.585979 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" containerName="registry-server"
Nov 25 07:27:39 crc kubenswrapper[4482]: E1125 07:27:39.585988 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9940aeba-b78c-4271-9748-02d3200887f8" containerName="registry-server"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.585994 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="9940aeba-b78c-4271-9748-02d3200887f8" containerName="registry-server"
Nov 25 07:27:39 crc kubenswrapper[4482]: E1125 07:27:39.586011 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9940aeba-b78c-4271-9748-02d3200887f8" containerName="extract-utilities"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.586018 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="9940aeba-b78c-4271-9748-02d3200887f8" containerName="extract-utilities"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.586215 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="9940aeba-b78c-4271-9748-02d3200887f8" containerName="registry-server"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.586226 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c415d8d-5722-46f4-bf0b-4ffd4c5662ba" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.586244 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="292b3bc0-f8e2-4202-9468-62d164a2605c" containerName="registry-server"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.586254 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbcdbbb5-c7f4-45f6-bf76-dcfa0d0a0b27" containerName="registry-server"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.586969 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.589449 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rldsl"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.589933 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.590522 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.590577 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.604627 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"]
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.641906 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.642044 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da456db2-5bd8-40d0-a229-036a6f9b95f7-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.642131 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/da456db2-5bd8-40d0-a229-036a6f9b95f7-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.744771 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.744854 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.744965 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.745015 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/da456db2-5bd8-40d0-a229-036a6f9b95f7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.745158 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da456db2-5bd8-40d0-a229-036a6f9b95f7-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.745399 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/da456db2-5bd8-40d0-a229-036a6f9b95f7-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.745508 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87zgz\" (UniqueName: \"kubernetes.io/projected/da456db2-5bd8-40d0-a229-036a6f9b95f7-kube-api-access-87zgz\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.745583 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/da456db2-5bd8-40d0-a229-036a6f9b95f7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.745661 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.746724 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da456db2-5bd8-40d0-a229-036a6f9b95f7-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.746734 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/da456db2-5bd8-40d0-a229-036a6f9b95f7-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.755025 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.847799 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87zgz\" (UniqueName: \"kubernetes.io/projected/da456db2-5bd8-40d0-a229-036a6f9b95f7-kube-api-access-87zgz\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.848048 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/da456db2-5bd8-40d0-a229-036a6f9b95f7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.848158 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.848348 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.848458 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.848575 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/da456db2-5bd8-40d0-a229-036a6f9b95f7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.848576 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/da456db2-5bd8-40d0-a229-036a6f9b95f7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.849098 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/da456db2-5bd8-40d0-a229-036a6f9b95f7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.851725 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.852409 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.853134 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.863877 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87zgz\" (UniqueName: \"kubernetes.io/projected/da456db2-5bd8-40d0-a229-036a6f9b95f7-kube-api-access-87zgz\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.879254 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:39 crc kubenswrapper[4482]: I1125 07:27:39.914976 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Nov 25 07:27:40 crc kubenswrapper[4482]: I1125 07:27:40.390002 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"]
Nov 25 07:27:40 crc kubenswrapper[4482]: I1125 07:27:40.392515 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 07:27:40 crc kubenswrapper[4482]: I1125 07:27:40.947112 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"da456db2-5bd8-40d0-a229-036a6f9b95f7","Type":"ContainerStarted","Data":"7c6e54c13b8c50c03c7d9186500c1891d89f7d0972bc0b96de272880977a6b3c"}
Nov 25 07:27:43 crc kubenswrapper[4482]: I1125 07:27:43.831657 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a"
Nov 25 07:27:43 crc kubenswrapper[4482]: E1125 07:27:43.832391 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 07:27:55 crc kubenswrapper[4482]: I1125 07:27:55.836245 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a"
Nov 25 07:27:55 crc kubenswrapper[4482]: E1125 07:27:55.836996 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 07:28:08 crc kubenswrapper[4482]: I1125 07:28:08.831045 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a"
Nov 25 07:28:08 crc kubenswrapper[4482]: E1125 07:28:08.831888 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 07:28:21 crc kubenswrapper[4482]: I1125 07:28:21.831753 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a"
Nov 25 07:28:21 crc kubenswrapper[4482]: E1125 07:28:21.832966 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 07:28:32 crc kubenswrapper[4482]: I1125 07:28:32.831595 4482 scope.go:117] "RemoveContainer" 
containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:28:32 crc kubenswrapper[4482]: E1125 07:28:32.832338 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:28:47 crc kubenswrapper[4482]: I1125 07:28:47.831996 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:28:51 crc kubenswrapper[4482]: E1125 07:28:51.119582 4482 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:1f5c0439f2433cb462b222a5bb23e629" Nov 25 07:28:51 crc kubenswrapper[4482]: E1125 07:28:51.120201 4482 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:1f5c0439f2433cb462b222a5bb23e629" Nov 25 07:28:51 crc kubenswrapper[4482]: E1125 07:28:51.122343 4482 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:1f5c0439f2433cb462b222a5bb23e629,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87zgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/term
ination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-multi-thread-testing_openstack(da456db2-5bd8-40d0-a229-036a6f9b95f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 07:28:51 crc kubenswrapper[4482]: E1125 07:28:51.123559 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="da456db2-5bd8-40d0-a229-036a6f9b95f7" Nov 25 07:28:51 crc kubenswrapper[4482]: I1125 07:28:51.656552 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"18e4c2c5821fe7617b4339bbb03fb79a5a37e5b95cbf929202dd3482f0e7421f"} Nov 25 07:28:51 crc kubenswrapper[4482]: E1125 07:28:51.660504 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:1f5c0439f2433cb462b222a5bb23e629\\\"\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="da456db2-5bd8-40d0-a229-036a6f9b95f7" Nov 25 07:29:04 crc kubenswrapper[4482]: I1125 07:29:04.520358 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 25 07:29:05 crc kubenswrapper[4482]: I1125 07:29:05.794594 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"da456db2-5bd8-40d0-a229-036a6f9b95f7","Type":"ContainerStarted","Data":"2138fc733930429e75b5b9925fbb6449f1437945455fc818dc278cc3afcb0d01"} Nov 25 07:29:05 crc kubenswrapper[4482]: I1125 07:29:05.823987 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=3.698812642 podStartE2EDuration="1m27.823968591s" podCreationTimestamp="2025-11-25 07:27:38 +0000 UTC" firstStartedPulling="2025-11-25 07:27:40.392238127 +0000 UTC m=+2434.880469386" lastFinishedPulling="2025-11-25 07:29:04.517394086 +0000 UTC m=+2519.005625335" observedRunningTime="2025-11-25 07:29:05.818417911 +0000 UTC m=+2520.306649160" watchObservedRunningTime="2025-11-25 07:29:05.823968591 +0000 UTC m=+2520.312199850" Nov 25 07:30:00 crc 
kubenswrapper[4482]: I1125 07:30:00.159682 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57"] Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.162449 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.166398 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.172012 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.179488 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57"] Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.248543 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18e69ffe-f523-4858-9f42-6f7d85a590a3-secret-volume\") pod \"collect-profiles-29400930-qcs57\" (UID: \"18e69ffe-f523-4858-9f42-6f7d85a590a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.248590 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18e69ffe-f523-4858-9f42-6f7d85a590a3-config-volume\") pod \"collect-profiles-29400930-qcs57\" (UID: \"18e69ffe-f523-4858-9f42-6f7d85a590a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.248658 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxwk9\" (UniqueName: \"kubernetes.io/projected/18e69ffe-f523-4858-9f42-6f7d85a590a3-kube-api-access-qxwk9\") pod \"collect-profiles-29400930-qcs57\" (UID: \"18e69ffe-f523-4858-9f42-6f7d85a590a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.350925 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18e69ffe-f523-4858-9f42-6f7d85a590a3-secret-volume\") pod \"collect-profiles-29400930-qcs57\" (UID: \"18e69ffe-f523-4858-9f42-6f7d85a590a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.351133 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18e69ffe-f523-4858-9f42-6f7d85a590a3-config-volume\") pod \"collect-profiles-29400930-qcs57\" (UID: \"18e69ffe-f523-4858-9f42-6f7d85a590a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.351285 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxwk9\" (UniqueName: \"kubernetes.io/projected/18e69ffe-f523-4858-9f42-6f7d85a590a3-kube-api-access-qxwk9\") pod \"collect-profiles-29400930-qcs57\" (UID: \"18e69ffe-f523-4858-9f42-6f7d85a590a3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.351984 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18e69ffe-f523-4858-9f42-6f7d85a590a3-config-volume\") pod \"collect-profiles-29400930-qcs57\" (UID: \"18e69ffe-f523-4858-9f42-6f7d85a590a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.357156 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18e69ffe-f523-4858-9f42-6f7d85a590a3-secret-volume\") pod \"collect-profiles-29400930-qcs57\" (UID: \"18e69ffe-f523-4858-9f42-6f7d85a590a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.370082 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxwk9\" (UniqueName: \"kubernetes.io/projected/18e69ffe-f523-4858-9f42-6f7d85a590a3-kube-api-access-qxwk9\") pod \"collect-profiles-29400930-qcs57\" (UID: \"18e69ffe-f523-4858-9f42-6f7d85a590a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" Nov 25 07:30:00 crc kubenswrapper[4482]: I1125 07:30:00.484106 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" Nov 25 07:30:01 crc kubenswrapper[4482]: I1125 07:30:01.007209 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57"] Nov 25 07:30:01 crc kubenswrapper[4482]: I1125 07:30:01.342683 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" event={"ID":"18e69ffe-f523-4858-9f42-6f7d85a590a3","Type":"ContainerStarted","Data":"458db17c88bfbd211181dc4a38c60cf866df53d27b5e826bae0eebaec2e88400"} Nov 25 07:30:01 crc kubenswrapper[4482]: I1125 07:30:01.343604 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" event={"ID":"18e69ffe-f523-4858-9f42-6f7d85a590a3","Type":"ContainerStarted","Data":"44c0a61f126958aec05ce8bdb64bd60f22dbece75450da852e155c1ef4cd13e8"} Nov 25 07:30:01 crc kubenswrapper[4482]: I1125 07:30:01.373973 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" podStartSLOduration=1.3739505410000001 podStartE2EDuration="1.373950541s" podCreationTimestamp="2025-11-25 07:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:30:01.366645375 +0000 UTC m=+2575.854876635" watchObservedRunningTime="2025-11-25 07:30:01.373950541 +0000 UTC m=+2575.862181800" Nov 25 07:30:02 crc kubenswrapper[4482]: I1125 07:30:02.357441 4482 generic.go:334] "Generic (PLEG): container finished" podID="18e69ffe-f523-4858-9f42-6f7d85a590a3" containerID="458db17c88bfbd211181dc4a38c60cf866df53d27b5e826bae0eebaec2e88400" exitCode=0 Nov 25 07:30:02 crc kubenswrapper[4482]: I1125 07:30:02.359822 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" 
event={"ID":"18e69ffe-f523-4858-9f42-6f7d85a590a3","Type":"ContainerDied","Data":"458db17c88bfbd211181dc4a38c60cf866df53d27b5e826bae0eebaec2e88400"} Nov 25 07:30:03 crc kubenswrapper[4482]: I1125 07:30:03.649746 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" Nov 25 07:30:03 crc kubenswrapper[4482]: I1125 07:30:03.832277 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxwk9\" (UniqueName: \"kubernetes.io/projected/18e69ffe-f523-4858-9f42-6f7d85a590a3-kube-api-access-qxwk9\") pod \"18e69ffe-f523-4858-9f42-6f7d85a590a3\" (UID: \"18e69ffe-f523-4858-9f42-6f7d85a590a3\") " Nov 25 07:30:03 crc kubenswrapper[4482]: I1125 07:30:03.832589 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18e69ffe-f523-4858-9f42-6f7d85a590a3-secret-volume\") pod \"18e69ffe-f523-4858-9f42-6f7d85a590a3\" (UID: \"18e69ffe-f523-4858-9f42-6f7d85a590a3\") " Nov 25 07:30:03 crc kubenswrapper[4482]: I1125 07:30:03.833397 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18e69ffe-f523-4858-9f42-6f7d85a590a3-config-volume\") pod \"18e69ffe-f523-4858-9f42-6f7d85a590a3\" (UID: \"18e69ffe-f523-4858-9f42-6f7d85a590a3\") " Nov 25 07:30:03 crc kubenswrapper[4482]: I1125 07:30:03.834486 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18e69ffe-f523-4858-9f42-6f7d85a590a3-config-volume" (OuterVolumeSpecName: "config-volume") pod "18e69ffe-f523-4858-9f42-6f7d85a590a3" (UID: "18e69ffe-f523-4858-9f42-6f7d85a590a3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:30:03 crc kubenswrapper[4482]: I1125 07:30:03.840073 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e69ffe-f523-4858-9f42-6f7d85a590a3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "18e69ffe-f523-4858-9f42-6f7d85a590a3" (UID: "18e69ffe-f523-4858-9f42-6f7d85a590a3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:30:03 crc kubenswrapper[4482]: I1125 07:30:03.840437 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e69ffe-f523-4858-9f42-6f7d85a590a3-kube-api-access-qxwk9" (OuterVolumeSpecName: "kube-api-access-qxwk9") pod "18e69ffe-f523-4858-9f42-6f7d85a590a3" (UID: "18e69ffe-f523-4858-9f42-6f7d85a590a3"). InnerVolumeSpecName "kube-api-access-qxwk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:30:03 crc kubenswrapper[4482]: I1125 07:30:03.936504 4482 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18e69ffe-f523-4858-9f42-6f7d85a590a3-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:03 crc kubenswrapper[4482]: I1125 07:30:03.936537 4482 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18e69ffe-f523-4858-9f42-6f7d85a590a3-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:03 crc kubenswrapper[4482]: I1125 07:30:03.936547 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxwk9\" (UniqueName: \"kubernetes.io/projected/18e69ffe-f523-4858-9f42-6f7d85a590a3-kube-api-access-qxwk9\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:04 crc kubenswrapper[4482]: I1125 07:30:04.381436 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" event={"ID":"18e69ffe-f523-4858-9f42-6f7d85a590a3","Type":"ContainerDied","Data":"44c0a61f126958aec05ce8bdb64bd60f22dbece75450da852e155c1ef4cd13e8"} Nov 25 07:30:04 crc kubenswrapper[4482]: I1125 07:30:04.382294 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44c0a61f126958aec05ce8bdb64bd60f22dbece75450da852e155c1ef4cd13e8" Nov 25 07:30:04 crc kubenswrapper[4482]: I1125 07:30:04.381512 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57" Nov 25 07:30:04 crc kubenswrapper[4482]: I1125 07:30:04.441664 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr"] Nov 25 07:30:04 crc kubenswrapper[4482]: I1125 07:30:04.450045 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400885-b4rtr"] Nov 25 07:30:05 crc kubenswrapper[4482]: I1125 07:30:05.842577 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ff92469-ca47-4359-b56a-8df7332739ab" path="/var/lib/kubelet/pods/9ff92469-ca47-4359-b56a-8df7332739ab/volumes" Nov 25 07:30:40 crc kubenswrapper[4482]: I1125 07:30:40.133220 4482 scope.go:117] "RemoveContainer" containerID="15fbe8f652383d0e7eda94bc0e38826dbb0cd557ed7d2c674bd037ed6e133196" Nov 25 07:31:09 crc kubenswrapper[4482]: I1125 07:31:09.117799 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:31:09 crc kubenswrapper[4482]: I1125 07:31:09.118824 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:31:39 crc kubenswrapper[4482]: I1125 07:31:39.117384 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 25 07:31:39 crc kubenswrapper[4482]: I1125 07:31:39.117695 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:32:09 crc kubenswrapper[4482]: I1125 07:32:09.117465 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:32:09 crc kubenswrapper[4482]: I1125 07:32:09.118603 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:32:09 crc kubenswrapper[4482]: I1125 07:32:09.118651 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 07:32:09 crc kubenswrapper[4482]: I1125 07:32:09.119932 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18e4c2c5821fe7617b4339bbb03fb79a5a37e5b95cbf929202dd3482f0e7421f"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:32:09 crc kubenswrapper[4482]: I1125 07:32:09.120072 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://18e4c2c5821fe7617b4339bbb03fb79a5a37e5b95cbf929202dd3482f0e7421f" gracePeriod=600 Nov 25 07:32:09 crc kubenswrapper[4482]: I1125 07:32:09.491928 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="18e4c2c5821fe7617b4339bbb03fb79a5a37e5b95cbf929202dd3482f0e7421f" exitCode=0 Nov 25 07:32:09 crc kubenswrapper[4482]: I1125 07:32:09.492010 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"18e4c2c5821fe7617b4339bbb03fb79a5a37e5b95cbf929202dd3482f0e7421f"} Nov 25 07:32:09 crc kubenswrapper[4482]: I1125 07:32:09.492449 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c"} Nov 25 07:32:09 crc kubenswrapper[4482]: I1125 07:32:09.492548 4482 scope.go:117] "RemoveContainer" containerID="dcd11c706f78a65c6973fb041d769119b6b60263eabcc39869e9ee66ce88c78a" Nov 25 07:33:06 crc kubenswrapper[4482]: I1125 07:33:06.918671 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q96kz"] Nov 25 07:33:06 crc kubenswrapper[4482]: E1125 07:33:06.924313 4482 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="18e69ffe-f523-4858-9f42-6f7d85a590a3" containerName="collect-profiles" Nov 25 07:33:06 crc kubenswrapper[4482]: I1125 07:33:06.924338 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e69ffe-f523-4858-9f42-6f7d85a590a3" containerName="collect-profiles" Nov 25 07:33:06 crc kubenswrapper[4482]: I1125 07:33:06.924567 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e69ffe-f523-4858-9f42-6f7d85a590a3" containerName="collect-profiles" Nov 25 07:33:06 crc kubenswrapper[4482]: I1125 07:33:06.931242 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q96kz"] Nov 25 07:33:06 crc kubenswrapper[4482]: I1125 07:33:06.931349 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:07 crc kubenswrapper[4482]: I1125 07:33:07.012836 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz958\" (UniqueName: \"kubernetes.io/projected/1f668583-b062-474a-8c04-6a6d4bc6bb6c-kube-api-access-jz958\") pod \"redhat-marketplace-q96kz\" (UID: \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\") " pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:07 crc kubenswrapper[4482]: I1125 07:33:07.012947 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f668583-b062-474a-8c04-6a6d4bc6bb6c-catalog-content\") pod \"redhat-marketplace-q96kz\" (UID: \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\") " pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:07 crc kubenswrapper[4482]: I1125 07:33:07.013033 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f668583-b062-474a-8c04-6a6d4bc6bb6c-utilities\") pod \"redhat-marketplace-q96kz\" (UID: \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\") " pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:07 crc kubenswrapper[4482]: I1125 07:33:07.115711 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f668583-b062-474a-8c04-6a6d4bc6bb6c-catalog-content\") pod \"redhat-marketplace-q96kz\" (UID: \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\") " pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:07 crc kubenswrapper[4482]: I1125 07:33:07.115816 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f668583-b062-474a-8c04-6a6d4bc6bb6c-utilities\") pod \"redhat-marketplace-q96kz\" (UID: \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\") " pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:07 crc kubenswrapper[4482]: I1125 07:33:07.115940 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz958\" (UniqueName: \"kubernetes.io/projected/1f668583-b062-474a-8c04-6a6d4bc6bb6c-kube-api-access-jz958\") pod \"redhat-marketplace-q96kz\" (UID: \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\") " pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:07 crc kubenswrapper[4482]: I1125 07:33:07.117015 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1f668583-b062-474a-8c04-6a6d4bc6bb6c-catalog-content\") pod \"redhat-marketplace-q96kz\" (UID: \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\") " pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:07 crc kubenswrapper[4482]: I1125 07:33:07.117051 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f668583-b062-474a-8c04-6a6d4bc6bb6c-utilities\") pod \"redhat-marketplace-q96kz\" (UID: \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\") " pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:07 crc kubenswrapper[4482]: I1125 07:33:07.135144 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz958\" (UniqueName: \"kubernetes.io/projected/1f668583-b062-474a-8c04-6a6d4bc6bb6c-kube-api-access-jz958\") pod \"redhat-marketplace-q96kz\" (UID: \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\") " pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:07 crc kubenswrapper[4482]: I1125 07:33:07.245715 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:07 crc kubenswrapper[4482]: I1125 07:33:07.946056 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q96kz"] Nov 25 07:33:08 crc kubenswrapper[4482]: I1125 07:33:08.927391 4482 generic.go:334] "Generic (PLEG): container finished" podID="1f668583-b062-474a-8c04-6a6d4bc6bb6c" containerID="de9efea510738565a4f05f8719a7a3fb76b4fc7e44bb68afcd2adee44842845f" exitCode=0 Nov 25 07:33:08 crc kubenswrapper[4482]: I1125 07:33:08.927537 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q96kz" event={"ID":"1f668583-b062-474a-8c04-6a6d4bc6bb6c","Type":"ContainerDied","Data":"de9efea510738565a4f05f8719a7a3fb76b4fc7e44bb68afcd2adee44842845f"} Nov 25 07:33:08 crc kubenswrapper[4482]: I1125 07:33:08.927888 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q96kz" event={"ID":"1f668583-b062-474a-8c04-6a6d4bc6bb6c","Type":"ContainerStarted","Data":"1f631bab8f178d53ff0951ac267f1ad01c57bae7611eb9f7d998cd0fae3cad46"} Nov 25 07:33:08 crc kubenswrapper[4482]: I1125 07:33:08.930653 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 07:33:09 crc kubenswrapper[4482]: I1125 07:33:09.963889 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q96kz" event={"ID":"1f668583-b062-474a-8c04-6a6d4bc6bb6c","Type":"ContainerStarted","Data":"6ebfb04238ee0b7c73583b6559c87be304eae3d2ea234dc67863b79286f3f5c7"} Nov 25 07:33:10 crc kubenswrapper[4482]: I1125 07:33:10.976250 4482 generic.go:334] "Generic (PLEG): container finished" podID="1f668583-b062-474a-8c04-6a6d4bc6bb6c" containerID="6ebfb04238ee0b7c73583b6559c87be304eae3d2ea234dc67863b79286f3f5c7" exitCode=0 Nov 25 07:33:10 crc kubenswrapper[4482]: I1125 07:33:10.976311 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q96kz" event={"ID":"1f668583-b062-474a-8c04-6a6d4bc6bb6c","Type":"ContainerDied","Data":"6ebfb04238ee0b7c73583b6559c87be304eae3d2ea234dc67863b79286f3f5c7"} Nov 25 07:33:11 crc kubenswrapper[4482]: I1125 07:33:11.986664 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q96kz" 
event={"ID":"1f668583-b062-474a-8c04-6a6d4bc6bb6c","Type":"ContainerStarted","Data":"5a426e82951c8b2fbd1966f64c7fd1d54a5a3ac9b701a3769ffc9616f969bc14"} Nov 25 07:33:12 crc kubenswrapper[4482]: I1125 07:33:12.004969 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q96kz" podStartSLOduration=3.461340128 podStartE2EDuration="6.004951809s" podCreationTimestamp="2025-11-25 07:33:06 +0000 UTC" firstStartedPulling="2025-11-25 07:33:08.929267916 +0000 UTC m=+2763.417499174" lastFinishedPulling="2025-11-25 07:33:11.472879596 +0000 UTC m=+2765.961110855" observedRunningTime="2025-11-25 07:33:12.002770729 +0000 UTC m=+2766.491001988" watchObservedRunningTime="2025-11-25 07:33:12.004951809 +0000 UTC m=+2766.493183067" Nov 25 07:33:17 crc kubenswrapper[4482]: I1125 07:33:17.246425 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:17 crc kubenswrapper[4482]: I1125 07:33:17.247844 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:17 crc kubenswrapper[4482]: I1125 07:33:17.291610 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:18 crc kubenswrapper[4482]: I1125 07:33:18.066229 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:18 crc kubenswrapper[4482]: I1125 07:33:18.104036 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q96kz"] Nov 25 07:33:20 crc kubenswrapper[4482]: I1125 07:33:20.046928 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q96kz" podUID="1f668583-b062-474a-8c04-6a6d4bc6bb6c" containerName="registry-server" containerID="cri-o://5a426e82951c8b2fbd1966f64c7fd1d54a5a3ac9b701a3769ffc9616f969bc14" gracePeriod=2 Nov 25 07:33:20 crc kubenswrapper[4482]: I1125 07:33:20.607039 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:20 crc kubenswrapper[4482]: I1125 07:33:20.766159 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f668583-b062-474a-8c04-6a6d4bc6bb6c-catalog-content\") pod \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\" (UID: \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\") " Nov 25 07:33:20 crc kubenswrapper[4482]: I1125 07:33:20.766418 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f668583-b062-474a-8c04-6a6d4bc6bb6c-utilities\") pod \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\" (UID: \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\") " Nov 25 07:33:20 crc kubenswrapper[4482]: I1125 07:33:20.766556 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz958\" (UniqueName: \"kubernetes.io/projected/1f668583-b062-474a-8c04-6a6d4bc6bb6c-kube-api-access-jz958\") pod \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\" (UID: \"1f668583-b062-474a-8c04-6a6d4bc6bb6c\") " Nov 25 07:33:20 crc kubenswrapper[4482]: I1125 07:33:20.767717 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f668583-b062-474a-8c04-6a6d4bc6bb6c-utilities" (OuterVolumeSpecName: "utilities") pod "1f668583-b062-474a-8c04-6a6d4bc6bb6c" (UID: "1f668583-b062-474a-8c04-6a6d4bc6bb6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:33:20 crc kubenswrapper[4482]: I1125 07:33:20.772277 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f668583-b062-474a-8c04-6a6d4bc6bb6c-kube-api-access-jz958" (OuterVolumeSpecName: "kube-api-access-jz958") pod "1f668583-b062-474a-8c04-6a6d4bc6bb6c" (UID: "1f668583-b062-474a-8c04-6a6d4bc6bb6c"). InnerVolumeSpecName "kube-api-access-jz958". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:33:20 crc kubenswrapper[4482]: I1125 07:33:20.779412 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f668583-b062-474a-8c04-6a6d4bc6bb6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f668583-b062-474a-8c04-6a6d4bc6bb6c" (UID: "1f668583-b062-474a-8c04-6a6d4bc6bb6c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:33:20 crc kubenswrapper[4482]: I1125 07:33:20.868963 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz958\" (UniqueName: \"kubernetes.io/projected/1f668583-b062-474a-8c04-6a6d4bc6bb6c-kube-api-access-jz958\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:20 crc kubenswrapper[4482]: I1125 07:33:20.869006 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f668583-b062-474a-8c04-6a6d4bc6bb6c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:20 crc kubenswrapper[4482]: I1125 07:33:20.869016 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f668583-b062-474a-8c04-6a6d4bc6bb6c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.055257 4482 generic.go:334] "Generic (PLEG): container finished" podID="1f668583-b062-474a-8c04-6a6d4bc6bb6c" containerID="5a426e82951c8b2fbd1966f64c7fd1d54a5a3ac9b701a3769ffc9616f969bc14" exitCode=0 Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.055300 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q96kz" event={"ID":"1f668583-b062-474a-8c04-6a6d4bc6bb6c","Type":"ContainerDied","Data":"5a426e82951c8b2fbd1966f64c7fd1d54a5a3ac9b701a3769ffc9616f969bc14"} Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.055310 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q96kz" Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.055331 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q96kz" event={"ID":"1f668583-b062-474a-8c04-6a6d4bc6bb6c","Type":"ContainerDied","Data":"1f631bab8f178d53ff0951ac267f1ad01c57bae7611eb9f7d998cd0fae3cad46"} Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.055348 4482 scope.go:117] "RemoveContainer" containerID="5a426e82951c8b2fbd1966f64c7fd1d54a5a3ac9b701a3769ffc9616f969bc14" Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.071918 4482 scope.go:117] "RemoveContainer" containerID="6ebfb04238ee0b7c73583b6559c87be304eae3d2ea234dc67863b79286f3f5c7" Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.097290 4482 scope.go:117] "RemoveContainer" containerID="de9efea510738565a4f05f8719a7a3fb76b4fc7e44bb68afcd2adee44842845f" Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.100691 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q96kz"] Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.108607 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q96kz"] Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.126163 4482 scope.go:117] "RemoveContainer" containerID="5a426e82951c8b2fbd1966f64c7fd1d54a5a3ac9b701a3769ffc9616f969bc14" Nov 25 07:33:21 crc kubenswrapper[4482]: E1125 07:33:21.129071 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a426e82951c8b2fbd1966f64c7fd1d54a5a3ac9b701a3769ffc9616f969bc14\": container with ID starting with 5a426e82951c8b2fbd1966f64c7fd1d54a5a3ac9b701a3769ffc9616f969bc14 not found: ID does not exist" containerID="5a426e82951c8b2fbd1966f64c7fd1d54a5a3ac9b701a3769ffc9616f969bc14" Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.129119 4482 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a426e82951c8b2fbd1966f64c7fd1d54a5a3ac9b701a3769ffc9616f969bc14"} err="failed to get container status \"5a426e82951c8b2fbd1966f64c7fd1d54a5a3ac9b701a3769ffc9616f969bc14\": rpc error: code = NotFound desc = could not find container \"5a426e82951c8b2fbd1966f64c7fd1d54a5a3ac9b701a3769ffc9616f969bc14\": container with ID starting with 5a426e82951c8b2fbd1966f64c7fd1d54a5a3ac9b701a3769ffc9616f969bc14 not found: ID does not exist" Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.129140 4482 scope.go:117] "RemoveContainer" containerID="6ebfb04238ee0b7c73583b6559c87be304eae3d2ea234dc67863b79286f3f5c7" Nov 25 07:33:21 crc kubenswrapper[4482]: E1125 07:33:21.129602 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ebfb04238ee0b7c73583b6559c87be304eae3d2ea234dc67863b79286f3f5c7\": container with ID starting with 6ebfb04238ee0b7c73583b6559c87be304eae3d2ea234dc67863b79286f3f5c7 not found: ID does not exist" containerID="6ebfb04238ee0b7c73583b6559c87be304eae3d2ea234dc67863b79286f3f5c7" Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.129714 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebfb04238ee0b7c73583b6559c87be304eae3d2ea234dc67863b79286f3f5c7"} err="failed to get container status \"6ebfb04238ee0b7c73583b6559c87be304eae3d2ea234dc67863b79286f3f5c7\": rpc error: code = NotFound desc = could not find container \"6ebfb04238ee0b7c73583b6559c87be304eae3d2ea234dc67863b79286f3f5c7\": container with ID starting with 6ebfb04238ee0b7c73583b6559c87be304eae3d2ea234dc67863b79286f3f5c7 not found: ID does not exist" Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.129810 4482 scope.go:117] "RemoveContainer" containerID="de9efea510738565a4f05f8719a7a3fb76b4fc7e44bb68afcd2adee44842845f" Nov 25 07:33:21 crc kubenswrapper[4482]: E1125 07:33:21.130289 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de9efea510738565a4f05f8719a7a3fb76b4fc7e44bb68afcd2adee44842845f\": container with ID starting with de9efea510738565a4f05f8719a7a3fb76b4fc7e44bb68afcd2adee44842845f not found: ID does not exist" containerID="de9efea510738565a4f05f8719a7a3fb76b4fc7e44bb68afcd2adee44842845f" Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.130323 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de9efea510738565a4f05f8719a7a3fb76b4fc7e44bb68afcd2adee44842845f"} err="failed to get container status \"de9efea510738565a4f05f8719a7a3fb76b4fc7e44bb68afcd2adee44842845f\": rpc error: code = NotFound desc = could not find container \"de9efea510738565a4f05f8719a7a3fb76b4fc7e44bb68afcd2adee44842845f\": container with ID starting with de9efea510738565a4f05f8719a7a3fb76b4fc7e44bb68afcd2adee44842845f not found: ID does not exist" Nov 25 07:33:21 crc kubenswrapper[4482]: I1125 07:33:21.838687 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f668583-b062-474a-8c04-6a6d4bc6bb6c" path="/var/lib/kubelet/pods/1f668583-b062-474a-8c04-6a6d4bc6bb6c/volumes" Nov 25 07:34:09 crc kubenswrapper[4482]: I1125 07:34:09.117961 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:34:09 crc kubenswrapper[4482]: I1125 07:34:09.118541 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:34:39 crc kubenswrapper[4482]: I1125 07:34:39.117776 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:34:39 crc kubenswrapper[4482]: I1125 07:34:39.118755 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:35:09 crc kubenswrapper[4482]: I1125 07:35:09.118242 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:35:09 crc kubenswrapper[4482]: I1125 07:35:09.118655 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:35:09 crc kubenswrapper[4482]: I1125 07:35:09.118690 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 07:35:09 crc kubenswrapper[4482]: I1125 07:35:09.119184 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:35:09 crc kubenswrapper[4482]: I1125 07:35:09.119231 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" gracePeriod=600 Nov 25 07:35:09 crc kubenswrapper[4482]: E1125 07:35:09.240759 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:35:09 crc kubenswrapper[4482]: I1125 07:35:09.873796 4482 
generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" exitCode=0 Nov 25 07:35:09 crc kubenswrapper[4482]: I1125 07:35:09.873834 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c"} Nov 25 07:35:09 crc kubenswrapper[4482]: I1125 07:35:09.873863 4482 scope.go:117] "RemoveContainer" containerID="18e4c2c5821fe7617b4339bbb03fb79a5a37e5b95cbf929202dd3482f0e7421f" Nov 25 07:35:09 crc kubenswrapper[4482]: I1125 07:35:09.874248 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:35:09 crc kubenswrapper[4482]: E1125 07:35:09.874469 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:35:22 crc kubenswrapper[4482]: I1125 07:35:22.830444 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:35:22 crc kubenswrapper[4482]: E1125 07:35:22.830965 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:35:37 crc kubenswrapper[4482]: I1125 07:35:37.831022 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:35:37 crc kubenswrapper[4482]: E1125 07:35:37.831592 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:35:52 crc kubenswrapper[4482]: I1125 07:35:52.830576 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:35:52 crc kubenswrapper[4482]: E1125 07:35:52.831093 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:36:03 crc kubenswrapper[4482]: I1125 07:36:03.831550 4482 scope.go:117] "RemoveContainer" 
containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:36:03 crc kubenswrapper[4482]: E1125 07:36:03.832296 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.085494 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dr664"] Nov 25 07:36:12 crc kubenswrapper[4482]: E1125 07:36:12.086242 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f668583-b062-474a-8c04-6a6d4bc6bb6c" containerName="extract-utilities" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.086256 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f668583-b062-474a-8c04-6a6d4bc6bb6c" containerName="extract-utilities" Nov 25 07:36:12 crc kubenswrapper[4482]: E1125 07:36:12.086275 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f668583-b062-474a-8c04-6a6d4bc6bb6c" containerName="extract-content" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.086283 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f668583-b062-474a-8c04-6a6d4bc6bb6c" containerName="extract-content" Nov 25 07:36:12 crc kubenswrapper[4482]: E1125 07:36:12.086314 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f668583-b062-474a-8c04-6a6d4bc6bb6c" containerName="registry-server" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.086320 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f668583-b062-474a-8c04-6a6d4bc6bb6c" containerName="registry-server" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.086489 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f668583-b062-474a-8c04-6a6d4bc6bb6c" containerName="registry-server" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.087870 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.099816 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dr664"] Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.115655 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jmsl\" (UniqueName: \"kubernetes.io/projected/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-kube-api-access-8jmsl\") pod \"community-operators-dr664\" (UID: \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\") " pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.115880 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-catalog-content\") pod \"community-operators-dr664\" (UID: \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\") " pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.116007 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-utilities\") pod \"community-operators-dr664\" (UID: \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\") " pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.218421 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-catalog-content\") pod \"community-operators-dr664\" (UID: \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\") " pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.219063 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-utilities\") pod \"community-operators-dr664\" (UID: \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\") " pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.219529 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jmsl\" (UniqueName: \"kubernetes.io/projected/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-kube-api-access-8jmsl\") pod \"community-operators-dr664\" (UID: \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\") " pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.218983 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-catalog-content\") pod \"community-operators-dr664\" (UID: \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\") " pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.219396 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-utilities\") pod \"community-operators-dr664\" (UID: \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\") " pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.242832 4482 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8jmsl\" (UniqueName: \"kubernetes.io/projected/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-kube-api-access-8jmsl\") pod \"community-operators-dr664\" (UID: \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\") " pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.403448 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:12 crc kubenswrapper[4482]: I1125 07:36:12.921192 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dr664"] Nov 25 07:36:13 crc kubenswrapper[4482]: I1125 07:36:13.290788 4482 generic.go:334] "Generic (PLEG): container finished" podID="45fa90fb-9ffd-45b9-96b2-74b7e35185ed" containerID="b3d19df082083e1be7b7c08021dd3d9c91b42c91303b3e1885db9d930abb8713" exitCode=0 Nov 25 07:36:13 crc kubenswrapper[4482]: I1125 07:36:13.290893 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dr664" event={"ID":"45fa90fb-9ffd-45b9-96b2-74b7e35185ed","Type":"ContainerDied","Data":"b3d19df082083e1be7b7c08021dd3d9c91b42c91303b3e1885db9d930abb8713"} Nov 25 07:36:13 crc kubenswrapper[4482]: I1125 07:36:13.292894 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dr664" event={"ID":"45fa90fb-9ffd-45b9-96b2-74b7e35185ed","Type":"ContainerStarted","Data":"b37f06b626ff8d812203d3682dcf010b363f7cdf09c0f43a429e278f85cf5f29"} Nov 25 07:36:14 crc kubenswrapper[4482]: I1125 07:36:14.305349 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dr664" event={"ID":"45fa90fb-9ffd-45b9-96b2-74b7e35185ed","Type":"ContainerStarted","Data":"2e4a28e1cc638b4b45c61e2defe49a0a64630c6b42b3396cef542214f2d95af4"} Nov 25 07:36:15 crc kubenswrapper[4482]: I1125 07:36:15.313411 4482 generic.go:334] "Generic (PLEG): container finished" podID="45fa90fb-9ffd-45b9-96b2-74b7e35185ed" containerID="2e4a28e1cc638b4b45c61e2defe49a0a64630c6b42b3396cef542214f2d95af4" exitCode=0 Nov 25 07:36:15 crc kubenswrapper[4482]: I1125 07:36:15.313466 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dr664" event={"ID":"45fa90fb-9ffd-45b9-96b2-74b7e35185ed","Type":"ContainerDied","Data":"2e4a28e1cc638b4b45c61e2defe49a0a64630c6b42b3396cef542214f2d95af4"} Nov 25 07:36:15 crc kubenswrapper[4482]: I1125 07:36:15.840007 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:36:15 crc kubenswrapper[4482]: E1125 07:36:15.840635 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:36:16 crc kubenswrapper[4482]: I1125 07:36:16.327407 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dr664" event={"ID":"45fa90fb-9ffd-45b9-96b2-74b7e35185ed","Type":"ContainerStarted","Data":"9b2231a978da8c4df80ac84ebfeceb24f365bb9952b602b0aa955b8652946900"} Nov 25 07:36:16 crc kubenswrapper[4482]: I1125 07:36:16.347659 4482 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dr664" podStartSLOduration=1.851683985 podStartE2EDuration="4.347639605s" podCreationTimestamp="2025-11-25 07:36:12 +0000 UTC" firstStartedPulling="2025-11-25 07:36:13.292201585 +0000 UTC m=+2947.780432844" lastFinishedPulling="2025-11-25 07:36:15.788157206 +0000 UTC m=+2950.276388464" observedRunningTime="2025-11-25 07:36:16.343552081 +0000 UTC m=+2950.831783340" watchObservedRunningTime="2025-11-25 07:36:16.347639605 +0000 UTC m=+2950.835870863" Nov 25 07:36:18 crc kubenswrapper[4482]: I1125 07:36:18.870335 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bdwnp"] Nov 25 07:36:18 crc kubenswrapper[4482]: I1125 07:36:18.873929 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:18 crc kubenswrapper[4482]: I1125 07:36:18.891769 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bdwnp"] Nov 25 07:36:18 crc kubenswrapper[4482]: I1125 07:36:18.892454 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf4c084-0c7a-4406-95a5-0ddb02428f61-utilities\") pod \"redhat-operators-bdwnp\" (UID: \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\") " pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:18 crc kubenswrapper[4482]: I1125 07:36:18.892649 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8gwl\" (UniqueName: \"kubernetes.io/projected/3bf4c084-0c7a-4406-95a5-0ddb02428f61-kube-api-access-q8gwl\") pod \"redhat-operators-bdwnp\" (UID: \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\") " pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:18 crc kubenswrapper[4482]: I1125 07:36:18.892781 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf4c084-0c7a-4406-95a5-0ddb02428f61-catalog-content\") pod \"redhat-operators-bdwnp\" (UID: \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\") " pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:19 crc kubenswrapper[4482]: I1125 07:36:19.009833 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8gwl\" (UniqueName: \"kubernetes.io/projected/3bf4c084-0c7a-4406-95a5-0ddb02428f61-kube-api-access-q8gwl\") pod \"redhat-operators-bdwnp\" (UID: \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\") " pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:19 crc kubenswrapper[4482]: I1125 07:36:19.009899 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf4c084-0c7a-4406-95a5-0ddb02428f61-catalog-content\") pod \"redhat-operators-bdwnp\" (UID: \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\") " pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:19 crc kubenswrapper[4482]: I1125 07:36:19.010012 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf4c084-0c7a-4406-95a5-0ddb02428f61-utilities\") pod \"redhat-operators-bdwnp\" (UID: \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\") " pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:19 crc kubenswrapper[4482]: I1125 
07:36:19.010421 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf4c084-0c7a-4406-95a5-0ddb02428f61-utilities\") pod \"redhat-operators-bdwnp\" (UID: \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\") " pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:19 crc kubenswrapper[4482]: I1125 07:36:19.010486 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf4c084-0c7a-4406-95a5-0ddb02428f61-catalog-content\") pod \"redhat-operators-bdwnp\" (UID: \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\") " pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:19 crc kubenswrapper[4482]: I1125 07:36:19.033709 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8gwl\" (UniqueName: \"kubernetes.io/projected/3bf4c084-0c7a-4406-95a5-0ddb02428f61-kube-api-access-q8gwl\") pod \"redhat-operators-bdwnp\" (UID: \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\") " pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:19 crc kubenswrapper[4482]: I1125 07:36:19.191868 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:19 crc kubenswrapper[4482]: I1125 07:36:19.639212 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bdwnp"] Nov 25 07:36:19 crc kubenswrapper[4482]: W1125 07:36:19.647709 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bf4c084_0c7a_4406_95a5_0ddb02428f61.slice/crio-de3a1689dd1f4f358d7550c45c4ab1847f8335e12e3d586ed3cc7a39bb686d21 WatchSource:0}: Error finding container de3a1689dd1f4f358d7550c45c4ab1847f8335e12e3d586ed3cc7a39bb686d21: Status 404 returned error can't find the container with id de3a1689dd1f4f358d7550c45c4ab1847f8335e12e3d586ed3cc7a39bb686d21 Nov 25 07:36:20 crc kubenswrapper[4482]: I1125 07:36:20.383816 4482 generic.go:334] "Generic (PLEG): container finished" podID="3bf4c084-0c7a-4406-95a5-0ddb02428f61" containerID="fbe301f6ab085f26c6338fad0199316c32616befb04b62f4c05b50f3027f40b2" exitCode=0 Nov 25 07:36:20 crc kubenswrapper[4482]: I1125 07:36:20.384195 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdwnp" event={"ID":"3bf4c084-0c7a-4406-95a5-0ddb02428f61","Type":"ContainerDied","Data":"fbe301f6ab085f26c6338fad0199316c32616befb04b62f4c05b50f3027f40b2"} Nov 25 07:36:20 crc kubenswrapper[4482]: I1125 07:36:20.384229 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdwnp" event={"ID":"3bf4c084-0c7a-4406-95a5-0ddb02428f61","Type":"ContainerStarted","Data":"de3a1689dd1f4f358d7550c45c4ab1847f8335e12e3d586ed3cc7a39bb686d21"} Nov 25 07:36:22 crc kubenswrapper[4482]: I1125 07:36:22.404374 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:22 crc kubenswrapper[4482]: I1125 07:36:22.411268 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:22 crc kubenswrapper[4482]: I1125 07:36:22.416839 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdwnp" 
event={"ID":"3bf4c084-0c7a-4406-95a5-0ddb02428f61","Type":"ContainerStarted","Data":"ea491cd751a2b13a4faec40f20dd43cb5dedc9dcf02ef26dd5dd4fb7264f6a83"} Nov 25 07:36:22 crc kubenswrapper[4482]: I1125 07:36:22.483286 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:23 crc kubenswrapper[4482]: I1125 07:36:23.458878 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:24 crc kubenswrapper[4482]: I1125 07:36:24.431877 4482 generic.go:334] "Generic (PLEG): container finished" podID="3bf4c084-0c7a-4406-95a5-0ddb02428f61" containerID="ea491cd751a2b13a4faec40f20dd43cb5dedc9dcf02ef26dd5dd4fb7264f6a83" exitCode=0 Nov 25 07:36:24 crc kubenswrapper[4482]: I1125 07:36:24.431963 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdwnp" event={"ID":"3bf4c084-0c7a-4406-95a5-0ddb02428f61","Type":"ContainerDied","Data":"ea491cd751a2b13a4faec40f20dd43cb5dedc9dcf02ef26dd5dd4fb7264f6a83"} Nov 25 07:36:24 crc kubenswrapper[4482]: I1125 07:36:24.664021 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dr664"] Nov 25 07:36:25 crc kubenswrapper[4482]: I1125 07:36:25.440801 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdwnp" event={"ID":"3bf4c084-0c7a-4406-95a5-0ddb02428f61","Type":"ContainerStarted","Data":"1f40bf493801e7d51c5f5a57ebb3cea3ef9a6cc763329b1eba9212e9c41433d7"} Nov 25 07:36:25 crc kubenswrapper[4482]: I1125 07:36:25.458910 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bdwnp" podStartSLOduration=2.945429967 podStartE2EDuration="7.458895928s" podCreationTimestamp="2025-11-25 07:36:18 +0000 UTC" firstStartedPulling="2025-11-25 07:36:20.386915423 +0000 UTC m=+2954.875146681" lastFinishedPulling="2025-11-25 07:36:24.900381383 +0000 UTC m=+2959.388612642" observedRunningTime="2025-11-25 07:36:25.452873678 +0000 UTC m=+2959.941104937" watchObservedRunningTime="2025-11-25 07:36:25.458895928 +0000 UTC m=+2959.947127187" Nov 25 07:36:26 crc kubenswrapper[4482]: I1125 07:36:26.445524 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dr664" podUID="45fa90fb-9ffd-45b9-96b2-74b7e35185ed" containerName="registry-server" containerID="cri-o://9b2231a978da8c4df80ac84ebfeceb24f365bb9952b602b0aa955b8652946900" gracePeriod=2 Nov 25 07:36:26 crc kubenswrapper[4482]: I1125 07:36:26.831109 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:36:26 crc kubenswrapper[4482]: E1125 07:36:26.831716 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.166706 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.285233 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jmsl\" (UniqueName: \"kubernetes.io/projected/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-kube-api-access-8jmsl\") pod \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\" (UID: \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\") " Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.285517 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-utilities\") pod \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\" (UID: \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\") " Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.285678 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-catalog-content\") pod \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\" (UID: \"45fa90fb-9ffd-45b9-96b2-74b7e35185ed\") " Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.286242 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-utilities" (OuterVolumeSpecName: "utilities") pod "45fa90fb-9ffd-45b9-96b2-74b7e35185ed" (UID: "45fa90fb-9ffd-45b9-96b2-74b7e35185ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.286732 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.317358 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-kube-api-access-8jmsl" (OuterVolumeSpecName: "kube-api-access-8jmsl") pod "45fa90fb-9ffd-45b9-96b2-74b7e35185ed" (UID: "45fa90fb-9ffd-45b9-96b2-74b7e35185ed"). InnerVolumeSpecName "kube-api-access-8jmsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.333197 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45fa90fb-9ffd-45b9-96b2-74b7e35185ed" (UID: "45fa90fb-9ffd-45b9-96b2-74b7e35185ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.387989 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.388149 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jmsl\" (UniqueName: \"kubernetes.io/projected/45fa90fb-9ffd-45b9-96b2-74b7e35185ed-kube-api-access-8jmsl\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.454026 4482 generic.go:334] "Generic (PLEG): container finished" podID="45fa90fb-9ffd-45b9-96b2-74b7e35185ed" containerID="9b2231a978da8c4df80ac84ebfeceb24f365bb9952b602b0aa955b8652946900" exitCode=0 Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.454067 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dr664" event={"ID":"45fa90fb-9ffd-45b9-96b2-74b7e35185ed","Type":"ContainerDied","Data":"9b2231a978da8c4df80ac84ebfeceb24f365bb9952b602b0aa955b8652946900"} Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.454094 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dr664" event={"ID":"45fa90fb-9ffd-45b9-96b2-74b7e35185ed","Type":"ContainerDied","Data":"b37f06b626ff8d812203d3682dcf010b363f7cdf09c0f43a429e278f85cf5f29"} Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.454110 4482 scope.go:117] "RemoveContainer" containerID="9b2231a978da8c4df80ac84ebfeceb24f365bb9952b602b0aa955b8652946900" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.454247 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dr664" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.503405 4482 scope.go:117] "RemoveContainer" containerID="2e4a28e1cc638b4b45c61e2defe49a0a64630c6b42b3396cef542214f2d95af4" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.506100 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dr664"] Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.516727 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dr664"] Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.523381 4482 scope.go:117] "RemoveContainer" containerID="b3d19df082083e1be7b7c08021dd3d9c91b42c91303b3e1885db9d930abb8713" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.559796 4482 scope.go:117] "RemoveContainer" containerID="9b2231a978da8c4df80ac84ebfeceb24f365bb9952b602b0aa955b8652946900" Nov 25 07:36:27 crc kubenswrapper[4482]: E1125 07:36:27.560719 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b2231a978da8c4df80ac84ebfeceb24f365bb9952b602b0aa955b8652946900\": container with ID starting with 9b2231a978da8c4df80ac84ebfeceb24f365bb9952b602b0aa955b8652946900 not found: ID does not exist" containerID="9b2231a978da8c4df80ac84ebfeceb24f365bb9952b602b0aa955b8652946900" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.560766 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b2231a978da8c4df80ac84ebfeceb24f365bb9952b602b0aa955b8652946900"} err="failed to get container status \"9b2231a978da8c4df80ac84ebfeceb24f365bb9952b602b0aa955b8652946900\": rpc error: code = NotFound desc = could not find container \"9b2231a978da8c4df80ac84ebfeceb24f365bb9952b602b0aa955b8652946900\": container with ID starting with 9b2231a978da8c4df80ac84ebfeceb24f365bb9952b602b0aa955b8652946900 not found: ID does not exist" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.560797 4482 scope.go:117] "RemoveContainer" containerID="2e4a28e1cc638b4b45c61e2defe49a0a64630c6b42b3396cef542214f2d95af4" Nov 25 07:36:27 crc kubenswrapper[4482]: E1125 07:36:27.561131 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4a28e1cc638b4b45c61e2defe49a0a64630c6b42b3396cef542214f2d95af4\": container with ID starting with 2e4a28e1cc638b4b45c61e2defe49a0a64630c6b42b3396cef542214f2d95af4 not found: ID does not exist" containerID="2e4a28e1cc638b4b45c61e2defe49a0a64630c6b42b3396cef542214f2d95af4" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.561159 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4a28e1cc638b4b45c61e2defe49a0a64630c6b42b3396cef542214f2d95af4"} err="failed to get container status \"2e4a28e1cc638b4b45c61e2defe49a0a64630c6b42b3396cef542214f2d95af4\": rpc error: code = NotFound desc = could not find container \"2e4a28e1cc638b4b45c61e2defe49a0a64630c6b42b3396cef542214f2d95af4\": container with ID starting with 2e4a28e1cc638b4b45c61e2defe49a0a64630c6b42b3396cef542214f2d95af4 not found: ID does not exist" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.561206 4482 scope.go:117] "RemoveContainer" containerID="b3d19df082083e1be7b7c08021dd3d9c91b42c91303b3e1885db9d930abb8713" Nov 25 07:36:27 crc kubenswrapper[4482]: E1125 07:36:27.562025 4482 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b3d19df082083e1be7b7c08021dd3d9c91b42c91303b3e1885db9d930abb8713\": container with ID starting with b3d19df082083e1be7b7c08021dd3d9c91b42c91303b3e1885db9d930abb8713 not found: ID does not exist" containerID="b3d19df082083e1be7b7c08021dd3d9c91b42c91303b3e1885db9d930abb8713" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.562060 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d19df082083e1be7b7c08021dd3d9c91b42c91303b3e1885db9d930abb8713"} err="failed to get container status \"b3d19df082083e1be7b7c08021dd3d9c91b42c91303b3e1885db9d930abb8713\": rpc error: code = NotFound desc = could not find container \"b3d19df082083e1be7b7c08021dd3d9c91b42c91303b3e1885db9d930abb8713\": container with ID starting with b3d19df082083e1be7b7c08021dd3d9c91b42c91303b3e1885db9d930abb8713 not found: ID does not exist" Nov 25 07:36:27 crc kubenswrapper[4482]: I1125 07:36:27.839207 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45fa90fb-9ffd-45b9-96b2-74b7e35185ed" path="/var/lib/kubelet/pods/45fa90fb-9ffd-45b9-96b2-74b7e35185ed/volumes" Nov 25 07:36:29 crc kubenswrapper[4482]: I1125 07:36:29.192676 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:29 crc kubenswrapper[4482]: I1125 07:36:29.193046 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:30 crc kubenswrapper[4482]: I1125 07:36:30.233091 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bdwnp" podUID="3bf4c084-0c7a-4406-95a5-0ddb02428f61" containerName="registry-server" probeResult="failure" output=< Nov 25 07:36:30 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 07:36:30 crc kubenswrapper[4482]: > Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.063234 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9dlh4"] Nov 25 07:36:33 crc kubenswrapper[4482]: E1125 07:36:33.068436 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fa90fb-9ffd-45b9-96b2-74b7e35185ed" containerName="extract-utilities" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.068604 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fa90fb-9ffd-45b9-96b2-74b7e35185ed" containerName="extract-utilities" Nov 25 07:36:33 crc kubenswrapper[4482]: E1125 07:36:33.068701 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fa90fb-9ffd-45b9-96b2-74b7e35185ed" containerName="extract-content" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.068763 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fa90fb-9ffd-45b9-96b2-74b7e35185ed" containerName="extract-content" Nov 25 07:36:33 crc kubenswrapper[4482]: E1125 07:36:33.068833 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fa90fb-9ffd-45b9-96b2-74b7e35185ed" containerName="registry-server" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.068892 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fa90fb-9ffd-45b9-96b2-74b7e35185ed" containerName="registry-server" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.069264 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="45fa90fb-9ffd-45b9-96b2-74b7e35185ed" containerName="registry-server" Nov 25 
07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.073242 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dlh4"] Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.073437 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.092795 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/165c1496-3e41-4f65-b28d-4684721e74c9-catalog-content\") pod \"certified-operators-9dlh4\" (UID: \"165c1496-3e41-4f65-b28d-4684721e74c9\") " pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.092957 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/165c1496-3e41-4f65-b28d-4684721e74c9-utilities\") pod \"certified-operators-9dlh4\" (UID: \"165c1496-3e41-4f65-b28d-4684721e74c9\") " pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.093038 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c58w8\" (UniqueName: \"kubernetes.io/projected/165c1496-3e41-4f65-b28d-4684721e74c9-kube-api-access-c58w8\") pod \"certified-operators-9dlh4\" (UID: \"165c1496-3e41-4f65-b28d-4684721e74c9\") " pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.194485 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/165c1496-3e41-4f65-b28d-4684721e74c9-catalog-content\") pod \"certified-operators-9dlh4\" (UID: \"165c1496-3e41-4f65-b28d-4684721e74c9\") " pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.194561 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/165c1496-3e41-4f65-b28d-4684721e74c9-utilities\") pod \"certified-operators-9dlh4\" (UID: \"165c1496-3e41-4f65-b28d-4684721e74c9\") " pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.194595 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c58w8\" (UniqueName: \"kubernetes.io/projected/165c1496-3e41-4f65-b28d-4684721e74c9-kube-api-access-c58w8\") pod \"certified-operators-9dlh4\" (UID: \"165c1496-3e41-4f65-b28d-4684721e74c9\") " pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.194850 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/165c1496-3e41-4f65-b28d-4684721e74c9-catalog-content\") pod \"certified-operators-9dlh4\" (UID: \"165c1496-3e41-4f65-b28d-4684721e74c9\") " pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.195199 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/165c1496-3e41-4f65-b28d-4684721e74c9-utilities\") pod \"certified-operators-9dlh4\" (UID: \"165c1496-3e41-4f65-b28d-4684721e74c9\") " 
pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.222364 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c58w8\" (UniqueName: \"kubernetes.io/projected/165c1496-3e41-4f65-b28d-4684721e74c9-kube-api-access-c58w8\") pod \"certified-operators-9dlh4\" (UID: \"165c1496-3e41-4f65-b28d-4684721e74c9\") " pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:33 crc kubenswrapper[4482]: I1125 07:36:33.393627 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:34 crc kubenswrapper[4482]: I1125 07:36:34.411420 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dlh4"] Nov 25 07:36:34 crc kubenswrapper[4482]: I1125 07:36:34.503739 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dlh4" event={"ID":"165c1496-3e41-4f65-b28d-4684721e74c9","Type":"ContainerStarted","Data":"5bd1f5f92865fd8bf3344daef6594e9d545856c22cce743b1db438a8de3416da"} Nov 25 07:36:35 crc kubenswrapper[4482]: I1125 07:36:35.511533 4482 generic.go:334] "Generic (PLEG): container finished" podID="165c1496-3e41-4f65-b28d-4684721e74c9" containerID="36767b9be84add8c4d0a00a3d5c65d21e88a37ddd5d71b24f9ae2ff672bd23d4" exitCode=0 Nov 25 07:36:35 crc kubenswrapper[4482]: I1125 07:36:35.511637 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dlh4" event={"ID":"165c1496-3e41-4f65-b28d-4684721e74c9","Type":"ContainerDied","Data":"36767b9be84add8c4d0a00a3d5c65d21e88a37ddd5d71b24f9ae2ff672bd23d4"} Nov 25 07:36:36 crc kubenswrapper[4482]: I1125 07:36:36.520641 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dlh4" event={"ID":"165c1496-3e41-4f65-b28d-4684721e74c9","Type":"ContainerStarted","Data":"8b112f224584c7063a21a8bc680072532dbe14778cbe6012d5fb6a90e8cc4297"} Nov 25 07:36:37 crc kubenswrapper[4482]: I1125 07:36:37.527926 4482 generic.go:334] "Generic (PLEG): container finished" podID="165c1496-3e41-4f65-b28d-4684721e74c9" containerID="8b112f224584c7063a21a8bc680072532dbe14778cbe6012d5fb6a90e8cc4297" exitCode=0 Nov 25 07:36:37 crc kubenswrapper[4482]: I1125 07:36:37.528030 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dlh4" event={"ID":"165c1496-3e41-4f65-b28d-4684721e74c9","Type":"ContainerDied","Data":"8b112f224584c7063a21a8bc680072532dbe14778cbe6012d5fb6a90e8cc4297"} Nov 25 07:36:37 crc kubenswrapper[4482]: I1125 07:36:37.830778 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:36:37 crc kubenswrapper[4482]: E1125 07:36:37.831217 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:36:38 crc kubenswrapper[4482]: I1125 07:36:38.537734 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dlh4" 
event={"ID":"165c1496-3e41-4f65-b28d-4684721e74c9","Type":"ContainerStarted","Data":"0e8ed1d23173979b5f523fc4356b76c80ebbacf7f7c650500ca85dddbb96e2ed"} Nov 25 07:36:38 crc kubenswrapper[4482]: I1125 07:36:38.559922 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9dlh4" podStartSLOduration=3.071746241 podStartE2EDuration="5.559906729s" podCreationTimestamp="2025-11-25 07:36:33 +0000 UTC" firstStartedPulling="2025-11-25 07:36:35.513880007 +0000 UTC m=+2970.002111266" lastFinishedPulling="2025-11-25 07:36:38.002040495 +0000 UTC m=+2972.490271754" observedRunningTime="2025-11-25 07:36:38.555039446 +0000 UTC m=+2973.043270704" watchObservedRunningTime="2025-11-25 07:36:38.559906729 +0000 UTC m=+2973.048137988" Nov 25 07:36:39 crc kubenswrapper[4482]: I1125 07:36:39.230122 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:39 crc kubenswrapper[4482]: I1125 07:36:39.272733 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:40 crc kubenswrapper[4482]: I1125 07:36:40.425226 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bdwnp"] Nov 25 07:36:40 crc kubenswrapper[4482]: I1125 07:36:40.552856 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bdwnp" podUID="3bf4c084-0c7a-4406-95a5-0ddb02428f61" containerName="registry-server" containerID="cri-o://1f40bf493801e7d51c5f5a57ebb3cea3ef9a6cc763329b1eba9212e9c41433d7" gracePeriod=2 Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.086083 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.255556 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf4c084-0c7a-4406-95a5-0ddb02428f61-utilities\") pod \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\" (UID: \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\") " Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.256097 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf4c084-0c7a-4406-95a5-0ddb02428f61-utilities" (OuterVolumeSpecName: "utilities") pod "3bf4c084-0c7a-4406-95a5-0ddb02428f61" (UID: "3bf4c084-0c7a-4406-95a5-0ddb02428f61"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.256260 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8gwl\" (UniqueName: \"kubernetes.io/projected/3bf4c084-0c7a-4406-95a5-0ddb02428f61-kube-api-access-q8gwl\") pod \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\" (UID: \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\") " Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.257494 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf4c084-0c7a-4406-95a5-0ddb02428f61-catalog-content\") pod \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\" (UID: \"3bf4c084-0c7a-4406-95a5-0ddb02428f61\") " Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.258200 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf4c084-0c7a-4406-95a5-0ddb02428f61-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.263390 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf4c084-0c7a-4406-95a5-0ddb02428f61-kube-api-access-q8gwl" (OuterVolumeSpecName: "kube-api-access-q8gwl") pod "3bf4c084-0c7a-4406-95a5-0ddb02428f61" (UID: "3bf4c084-0c7a-4406-95a5-0ddb02428f61"). InnerVolumeSpecName "kube-api-access-q8gwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.323098 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf4c084-0c7a-4406-95a5-0ddb02428f61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bf4c084-0c7a-4406-95a5-0ddb02428f61" (UID: "3bf4c084-0c7a-4406-95a5-0ddb02428f61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.361027 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8gwl\" (UniqueName: \"kubernetes.io/projected/3bf4c084-0c7a-4406-95a5-0ddb02428f61-kube-api-access-q8gwl\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.361063 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf4c084-0c7a-4406-95a5-0ddb02428f61-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.561723 4482 generic.go:334] "Generic (PLEG): container finished" podID="3bf4c084-0c7a-4406-95a5-0ddb02428f61" containerID="1f40bf493801e7d51c5f5a57ebb3cea3ef9a6cc763329b1eba9212e9c41433d7" exitCode=0 Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.561765 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdwnp" event={"ID":"3bf4c084-0c7a-4406-95a5-0ddb02428f61","Type":"ContainerDied","Data":"1f40bf493801e7d51c5f5a57ebb3cea3ef9a6cc763329b1eba9212e9c41433d7"} Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.561798 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bdwnp" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.561815 4482 scope.go:117] "RemoveContainer" containerID="1f40bf493801e7d51c5f5a57ebb3cea3ef9a6cc763329b1eba9212e9c41433d7" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.561803 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdwnp" event={"ID":"3bf4c084-0c7a-4406-95a5-0ddb02428f61","Type":"ContainerDied","Data":"de3a1689dd1f4f358d7550c45c4ab1847f8335e12e3d586ed3cc7a39bb686d21"} Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.588414 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bdwnp"] Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.593141 4482 scope.go:117] "RemoveContainer" containerID="ea491cd751a2b13a4faec40f20dd43cb5dedc9dcf02ef26dd5dd4fb7264f6a83" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.597150 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bdwnp"] Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.613640 4482 scope.go:117] "RemoveContainer" containerID="fbe301f6ab085f26c6338fad0199316c32616befb04b62f4c05b50f3027f40b2" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.660485 4482 scope.go:117] "RemoveContainer" containerID="1f40bf493801e7d51c5f5a57ebb3cea3ef9a6cc763329b1eba9212e9c41433d7" Nov 25 07:36:41 crc kubenswrapper[4482]: E1125 07:36:41.661158 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f40bf493801e7d51c5f5a57ebb3cea3ef9a6cc763329b1eba9212e9c41433d7\": container with ID starting with 1f40bf493801e7d51c5f5a57ebb3cea3ef9a6cc763329b1eba9212e9c41433d7 not found: ID does not exist" containerID="1f40bf493801e7d51c5f5a57ebb3cea3ef9a6cc763329b1eba9212e9c41433d7" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.661220 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f40bf493801e7d51c5f5a57ebb3cea3ef9a6cc763329b1eba9212e9c41433d7"} err="failed to get container status \"1f40bf493801e7d51c5f5a57ebb3cea3ef9a6cc763329b1eba9212e9c41433d7\": rpc error: code = NotFound desc = could not find container \"1f40bf493801e7d51c5f5a57ebb3cea3ef9a6cc763329b1eba9212e9c41433d7\": container with ID starting with 1f40bf493801e7d51c5f5a57ebb3cea3ef9a6cc763329b1eba9212e9c41433d7 not found: ID does not exist" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.661245 4482 scope.go:117] "RemoveContainer" containerID="ea491cd751a2b13a4faec40f20dd43cb5dedc9dcf02ef26dd5dd4fb7264f6a83" Nov 25 07:36:41 crc kubenswrapper[4482]: E1125 07:36:41.661576 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea491cd751a2b13a4faec40f20dd43cb5dedc9dcf02ef26dd5dd4fb7264f6a83\": container with ID starting with ea491cd751a2b13a4faec40f20dd43cb5dedc9dcf02ef26dd5dd4fb7264f6a83 not found: ID does not exist" containerID="ea491cd751a2b13a4faec40f20dd43cb5dedc9dcf02ef26dd5dd4fb7264f6a83" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.661608 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea491cd751a2b13a4faec40f20dd43cb5dedc9dcf02ef26dd5dd4fb7264f6a83"} err="failed to get container status \"ea491cd751a2b13a4faec40f20dd43cb5dedc9dcf02ef26dd5dd4fb7264f6a83\": rpc error: code = NotFound desc = could not find container 
\"ea491cd751a2b13a4faec40f20dd43cb5dedc9dcf02ef26dd5dd4fb7264f6a83\": container with ID starting with ea491cd751a2b13a4faec40f20dd43cb5dedc9dcf02ef26dd5dd4fb7264f6a83 not found: ID does not exist" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.661630 4482 scope.go:117] "RemoveContainer" containerID="fbe301f6ab085f26c6338fad0199316c32616befb04b62f4c05b50f3027f40b2" Nov 25 07:36:41 crc kubenswrapper[4482]: E1125 07:36:41.661829 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbe301f6ab085f26c6338fad0199316c32616befb04b62f4c05b50f3027f40b2\": container with ID starting with fbe301f6ab085f26c6338fad0199316c32616befb04b62f4c05b50f3027f40b2 not found: ID does not exist" containerID="fbe301f6ab085f26c6338fad0199316c32616befb04b62f4c05b50f3027f40b2" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.661866 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe301f6ab085f26c6338fad0199316c32616befb04b62f4c05b50f3027f40b2"} err="failed to get container status \"fbe301f6ab085f26c6338fad0199316c32616befb04b62f4c05b50f3027f40b2\": rpc error: code = NotFound desc = could not find container \"fbe301f6ab085f26c6338fad0199316c32616befb04b62f4c05b50f3027f40b2\": container with ID starting with fbe301f6ab085f26c6338fad0199316c32616befb04b62f4c05b50f3027f40b2 not found: ID does not exist" Nov 25 07:36:41 crc kubenswrapper[4482]: I1125 07:36:41.839649 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf4c084-0c7a-4406-95a5-0ddb02428f61" path="/var/lib/kubelet/pods/3bf4c084-0c7a-4406-95a5-0ddb02428f61/volumes" Nov 25 07:36:43 crc kubenswrapper[4482]: I1125 07:36:43.393800 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:43 crc kubenswrapper[4482]: I1125 07:36:43.394211 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:43 crc kubenswrapper[4482]: I1125 07:36:43.430060 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:43 crc kubenswrapper[4482]: I1125 07:36:43.616400 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:45 crc kubenswrapper[4482]: I1125 07:36:45.625430 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dlh4"] Nov 25 07:36:45 crc kubenswrapper[4482]: I1125 07:36:45.625985 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9dlh4" podUID="165c1496-3e41-4f65-b28d-4684721e74c9" containerName="registry-server" containerID="cri-o://0e8ed1d23173979b5f523fc4356b76c80ebbacf7f7c650500ca85dddbb96e2ed" gracePeriod=2 Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.115257 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.253525 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/165c1496-3e41-4f65-b28d-4684721e74c9-utilities\") pod \"165c1496-3e41-4f65-b28d-4684721e74c9\" (UID: \"165c1496-3e41-4f65-b28d-4684721e74c9\") " Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.253625 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c58w8\" (UniqueName: \"kubernetes.io/projected/165c1496-3e41-4f65-b28d-4684721e74c9-kube-api-access-c58w8\") pod \"165c1496-3e41-4f65-b28d-4684721e74c9\" (UID: \"165c1496-3e41-4f65-b28d-4684721e74c9\") " Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.253810 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/165c1496-3e41-4f65-b28d-4684721e74c9-catalog-content\") pod \"165c1496-3e41-4f65-b28d-4684721e74c9\" (UID: \"165c1496-3e41-4f65-b28d-4684721e74c9\") " Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.255064 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/165c1496-3e41-4f65-b28d-4684721e74c9-utilities" (OuterVolumeSpecName: "utilities") pod "165c1496-3e41-4f65-b28d-4684721e74c9" (UID: "165c1496-3e41-4f65-b28d-4684721e74c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.260051 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/165c1496-3e41-4f65-b28d-4684721e74c9-kube-api-access-c58w8" (OuterVolumeSpecName: "kube-api-access-c58w8") pod "165c1496-3e41-4f65-b28d-4684721e74c9" (UID: "165c1496-3e41-4f65-b28d-4684721e74c9"). InnerVolumeSpecName "kube-api-access-c58w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.296204 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/165c1496-3e41-4f65-b28d-4684721e74c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "165c1496-3e41-4f65-b28d-4684721e74c9" (UID: "165c1496-3e41-4f65-b28d-4684721e74c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.356383 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/165c1496-3e41-4f65-b28d-4684721e74c9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.356433 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c58w8\" (UniqueName: \"kubernetes.io/projected/165c1496-3e41-4f65-b28d-4684721e74c9-kube-api-access-c58w8\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.356446 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/165c1496-3e41-4f65-b28d-4684721e74c9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.605828 4482 generic.go:334] "Generic (PLEG): container finished" podID="165c1496-3e41-4f65-b28d-4684721e74c9" containerID="0e8ed1d23173979b5f523fc4356b76c80ebbacf7f7c650500ca85dddbb96e2ed" exitCode=0 Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.605862 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dlh4" event={"ID":"165c1496-3e41-4f65-b28d-4684721e74c9","Type":"ContainerDied","Data":"0e8ed1d23173979b5f523fc4356b76c80ebbacf7f7c650500ca85dddbb96e2ed"} Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.605885 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dlh4" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.605904 4482 scope.go:117] "RemoveContainer" containerID="0e8ed1d23173979b5f523fc4356b76c80ebbacf7f7c650500ca85dddbb96e2ed" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.605890 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dlh4" event={"ID":"165c1496-3e41-4f65-b28d-4684721e74c9","Type":"ContainerDied","Data":"5bd1f5f92865fd8bf3344daef6594e9d545856c22cce743b1db438a8de3416da"} Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.622589 4482 scope.go:117] "RemoveContainer" containerID="8b112f224584c7063a21a8bc680072532dbe14778cbe6012d5fb6a90e8cc4297" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.632433 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dlh4"] Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.639405 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9dlh4"] Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.648382 4482 scope.go:117] "RemoveContainer" containerID="36767b9be84add8c4d0a00a3d5c65d21e88a37ddd5d71b24f9ae2ff672bd23d4" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.683675 4482 scope.go:117] "RemoveContainer" containerID="0e8ed1d23173979b5f523fc4356b76c80ebbacf7f7c650500ca85dddbb96e2ed" Nov 25 07:36:46 crc kubenswrapper[4482]: E1125 07:36:46.684461 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e8ed1d23173979b5f523fc4356b76c80ebbacf7f7c650500ca85dddbb96e2ed\": container with ID starting with 0e8ed1d23173979b5f523fc4356b76c80ebbacf7f7c650500ca85dddbb96e2ed not found: ID does not exist" containerID="0e8ed1d23173979b5f523fc4356b76c80ebbacf7f7c650500ca85dddbb96e2ed" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.684518 
4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e8ed1d23173979b5f523fc4356b76c80ebbacf7f7c650500ca85dddbb96e2ed"} err="failed to get container status \"0e8ed1d23173979b5f523fc4356b76c80ebbacf7f7c650500ca85dddbb96e2ed\": rpc error: code = NotFound desc = could not find container \"0e8ed1d23173979b5f523fc4356b76c80ebbacf7f7c650500ca85dddbb96e2ed\": container with ID starting with 0e8ed1d23173979b5f523fc4356b76c80ebbacf7f7c650500ca85dddbb96e2ed not found: ID does not exist" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.684542 4482 scope.go:117] "RemoveContainer" containerID="8b112f224584c7063a21a8bc680072532dbe14778cbe6012d5fb6a90e8cc4297" Nov 25 07:36:46 crc kubenswrapper[4482]: E1125 07:36:46.684923 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b112f224584c7063a21a8bc680072532dbe14778cbe6012d5fb6a90e8cc4297\": container with ID starting with 8b112f224584c7063a21a8bc680072532dbe14778cbe6012d5fb6a90e8cc4297 not found: ID does not exist" containerID="8b112f224584c7063a21a8bc680072532dbe14778cbe6012d5fb6a90e8cc4297" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.684945 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b112f224584c7063a21a8bc680072532dbe14778cbe6012d5fb6a90e8cc4297"} err="failed to get container status \"8b112f224584c7063a21a8bc680072532dbe14778cbe6012d5fb6a90e8cc4297\": rpc error: code = NotFound desc = could not find container \"8b112f224584c7063a21a8bc680072532dbe14778cbe6012d5fb6a90e8cc4297\": container with ID starting with 8b112f224584c7063a21a8bc680072532dbe14778cbe6012d5fb6a90e8cc4297 not found: ID does not exist" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.684958 4482 scope.go:117] "RemoveContainer" containerID="36767b9be84add8c4d0a00a3d5c65d21e88a37ddd5d71b24f9ae2ff672bd23d4" Nov 25 07:36:46 crc kubenswrapper[4482]: E1125 07:36:46.685269 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36767b9be84add8c4d0a00a3d5c65d21e88a37ddd5d71b24f9ae2ff672bd23d4\": container with ID starting with 36767b9be84add8c4d0a00a3d5c65d21e88a37ddd5d71b24f9ae2ff672bd23d4 not found: ID does not exist" containerID="36767b9be84add8c4d0a00a3d5c65d21e88a37ddd5d71b24f9ae2ff672bd23d4" Nov 25 07:36:46 crc kubenswrapper[4482]: I1125 07:36:46.685300 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36767b9be84add8c4d0a00a3d5c65d21e88a37ddd5d71b24f9ae2ff672bd23d4"} err="failed to get container status \"36767b9be84add8c4d0a00a3d5c65d21e88a37ddd5d71b24f9ae2ff672bd23d4\": rpc error: code = NotFound desc = could not find container \"36767b9be84add8c4d0a00a3d5c65d21e88a37ddd5d71b24f9ae2ff672bd23d4\": container with ID starting with 36767b9be84add8c4d0a00a3d5c65d21e88a37ddd5d71b24f9ae2ff672bd23d4 not found: ID does not exist" Nov 25 07:36:47 crc kubenswrapper[4482]: I1125 07:36:47.839731 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="165c1496-3e41-4f65-b28d-4684721e74c9" path="/var/lib/kubelet/pods/165c1496-3e41-4f65-b28d-4684721e74c9/volumes" Nov 25 07:36:49 crc kubenswrapper[4482]: I1125 07:36:49.830472 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:36:49 crc kubenswrapper[4482]: E1125 07:36:49.831063 4482 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:37:02 crc kubenswrapper[4482]: I1125 07:37:02.831846 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:37:02 crc kubenswrapper[4482]: E1125 07:37:02.832621 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:37:17 crc kubenswrapper[4482]: I1125 07:37:17.830886 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:37:17 crc kubenswrapper[4482]: E1125 07:37:17.831495 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:37:31 crc kubenswrapper[4482]: I1125 07:37:31.831050 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:37:31 crc kubenswrapper[4482]: E1125 07:37:31.831580 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:37:42 crc kubenswrapper[4482]: I1125 07:37:42.831772 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:37:42 crc kubenswrapper[4482]: E1125 07:37:42.832679 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:37:53 crc kubenswrapper[4482]: I1125 07:37:53.830972 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:37:53 crc kubenswrapper[4482]: E1125 07:37:53.831831 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:38:04 crc kubenswrapper[4482]: I1125 07:38:04.830558 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:38:04 crc kubenswrapper[4482]: E1125 07:38:04.831567 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:38:18 crc kubenswrapper[4482]: I1125 07:38:18.831028 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:38:18 crc kubenswrapper[4482]: E1125 07:38:18.831729 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:38:31 crc kubenswrapper[4482]: I1125 07:38:31.830766 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:38:31 crc kubenswrapper[4482]: E1125 07:38:31.831352 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:38:42 crc kubenswrapper[4482]: I1125 07:38:42.831259 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:38:42 crc kubenswrapper[4482]: E1125 07:38:42.833697 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:38:56 crc kubenswrapper[4482]: I1125 07:38:56.830499 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:38:56 crc kubenswrapper[4482]: E1125 07:38:56.831085 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" 
podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:39:09 crc kubenswrapper[4482]: I1125 07:39:09.830968 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:39:09 crc kubenswrapper[4482]: E1125 07:39:09.831465 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:39:24 crc kubenswrapper[4482]: I1125 07:39:24.831369 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:39:24 crc kubenswrapper[4482]: E1125 07:39:24.831914 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:39:38 crc kubenswrapper[4482]: I1125 07:39:38.830567 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:39:38 crc kubenswrapper[4482]: E1125 07:39:38.831090 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:39:50 crc kubenswrapper[4482]: I1125 07:39:50.831053 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:39:50 crc kubenswrapper[4482]: E1125 07:39:50.831756 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:40:03 crc kubenswrapper[4482]: I1125 07:40:03.830917 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:40:03 crc kubenswrapper[4482]: E1125 07:40:03.831535 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:40:16 crc kubenswrapper[4482]: I1125 07:40:16.830894 4482 scope.go:117] "RemoveContainer" 
containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c" Nov 25 07:40:17 crc kubenswrapper[4482]: I1125 07:40:17.079515 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"3ce3631ef3681014543864be97a86fd66fac2ab88fbb1ecc2f8ef2fc997ce1c7"} Nov 25 07:42:39 crc kubenswrapper[4482]: I1125 07:42:39.118077 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:42:39 crc kubenswrapper[4482]: I1125 07:42:39.120399 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.117870 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.118268 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.762735 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4b9s5"] Nov 25 07:43:09 crc kubenswrapper[4482]: E1125 07:43:09.763669 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165c1496-3e41-4f65-b28d-4684721e74c9" containerName="registry-server" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.763692 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="165c1496-3e41-4f65-b28d-4684721e74c9" containerName="registry-server" Nov 25 07:43:09 crc kubenswrapper[4482]: E1125 07:43:09.764252 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf4c084-0c7a-4406-95a5-0ddb02428f61" containerName="extract-content" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.764267 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf4c084-0c7a-4406-95a5-0ddb02428f61" containerName="extract-content" Nov 25 07:43:09 crc kubenswrapper[4482]: E1125 07:43:09.764299 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165c1496-3e41-4f65-b28d-4684721e74c9" containerName="extract-content" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.764305 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="165c1496-3e41-4f65-b28d-4684721e74c9" containerName="extract-content" Nov 25 07:43:09 crc kubenswrapper[4482]: E1125 07:43:09.764316 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf4c084-0c7a-4406-95a5-0ddb02428f61" containerName="registry-server" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.764321 4482 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf4c084-0c7a-4406-95a5-0ddb02428f61" containerName="registry-server" Nov 25 07:43:09 crc kubenswrapper[4482]: E1125 07:43:09.764345 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf4c084-0c7a-4406-95a5-0ddb02428f61" containerName="extract-utilities" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.764350 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf4c084-0c7a-4406-95a5-0ddb02428f61" containerName="extract-utilities" Nov 25 07:43:09 crc kubenswrapper[4482]: E1125 07:43:09.764358 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165c1496-3e41-4f65-b28d-4684721e74c9" containerName="extract-utilities" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.764363 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="165c1496-3e41-4f65-b28d-4684721e74c9" containerName="extract-utilities" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.765817 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="165c1496-3e41-4f65-b28d-4684721e74c9" containerName="registry-server" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.765846 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf4c084-0c7a-4406-95a5-0ddb02428f61" containerName="registry-server" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.770812 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.785332 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b9s5"] Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.848033 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpqfg\" (UniqueName: \"kubernetes.io/projected/4f84e88a-3073-4827-a58c-e577e1cd4fa8-kube-api-access-vpqfg\") pod \"redhat-marketplace-4b9s5\" (UID: \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\") " pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.848293 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f84e88a-3073-4827-a58c-e577e1cd4fa8-catalog-content\") pod \"redhat-marketplace-4b9s5\" (UID: \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\") " pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.848317 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f84e88a-3073-4827-a58c-e577e1cd4fa8-utilities\") pod \"redhat-marketplace-4b9s5\" (UID: \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\") " pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.950468 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f84e88a-3073-4827-a58c-e577e1cd4fa8-catalog-content\") pod \"redhat-marketplace-4b9s5\" (UID: \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\") " pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.950510 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4f84e88a-3073-4827-a58c-e577e1cd4fa8-utilities\") pod \"redhat-marketplace-4b9s5\" (UID: \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\") " pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.950570 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpqfg\" (UniqueName: \"kubernetes.io/projected/4f84e88a-3073-4827-a58c-e577e1cd4fa8-kube-api-access-vpqfg\") pod \"redhat-marketplace-4b9s5\" (UID: \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\") " pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.953258 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f84e88a-3073-4827-a58c-e577e1cd4fa8-utilities\") pod \"redhat-marketplace-4b9s5\" (UID: \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\") " pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.953865 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f84e88a-3073-4827-a58c-e577e1cd4fa8-catalog-content\") pod \"redhat-marketplace-4b9s5\" (UID: \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\") " pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:09 crc kubenswrapper[4482]: I1125 07:43:09.971711 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpqfg\" (UniqueName: \"kubernetes.io/projected/4f84e88a-3073-4827-a58c-e577e1cd4fa8-kube-api-access-vpqfg\") pod \"redhat-marketplace-4b9s5\" (UID: \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\") " pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:10 crc kubenswrapper[4482]: I1125 07:43:10.091855 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:10 crc kubenswrapper[4482]: I1125 07:43:10.812884 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b9s5"] Nov 25 07:43:11 crc kubenswrapper[4482]: I1125 07:43:11.311205 4482 generic.go:334] "Generic (PLEG): container finished" podID="4f84e88a-3073-4827-a58c-e577e1cd4fa8" containerID="140ced5815f225a27879caa263556e91b38d2b7eb443121c58165e5073a09b94" exitCode=0 Nov 25 07:43:11 crc kubenswrapper[4482]: I1125 07:43:11.311543 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9s5" event={"ID":"4f84e88a-3073-4827-a58c-e577e1cd4fa8","Type":"ContainerDied","Data":"140ced5815f225a27879caa263556e91b38d2b7eb443121c58165e5073a09b94"} Nov 25 07:43:11 crc kubenswrapper[4482]: I1125 07:43:11.311616 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9s5" event={"ID":"4f84e88a-3073-4827-a58c-e577e1cd4fa8","Type":"ContainerStarted","Data":"4ed5a463bf0e0076bb9ae9851e89d26a04b9bee62a973f81acb991610e14321e"} Nov 25 07:43:11 crc kubenswrapper[4482]: I1125 07:43:11.314942 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 07:43:12 crc kubenswrapper[4482]: I1125 07:43:12.320147 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9s5" event={"ID":"4f84e88a-3073-4827-a58c-e577e1cd4fa8","Type":"ContainerStarted","Data":"c4ac8cae1df765745ec10bc9b3975dcddd6c8b90de81ed45ff19c5276cbb52af"} Nov 25 07:43:13 crc kubenswrapper[4482]: I1125 07:43:13.328548 4482 generic.go:334] "Generic (PLEG): container finished" podID="4f84e88a-3073-4827-a58c-e577e1cd4fa8" containerID="c4ac8cae1df765745ec10bc9b3975dcddd6c8b90de81ed45ff19c5276cbb52af" exitCode=0 Nov 25 07:43:13 crc kubenswrapper[4482]: I1125 07:43:13.328608 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9s5" event={"ID":"4f84e88a-3073-4827-a58c-e577e1cd4fa8","Type":"ContainerDied","Data":"c4ac8cae1df765745ec10bc9b3975dcddd6c8b90de81ed45ff19c5276cbb52af"} Nov 25 07:43:14 crc kubenswrapper[4482]: I1125 07:43:14.336525 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9s5" event={"ID":"4f84e88a-3073-4827-a58c-e577e1cd4fa8","Type":"ContainerStarted","Data":"59d983f4da6dfbf878225b78914b3aee826a85fbc4471bd554120fbeaab83326"} Nov 25 07:43:14 crc kubenswrapper[4482]: I1125 07:43:14.355949 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4b9s5" podStartSLOduration=2.842980586 podStartE2EDuration="5.355482732s" podCreationTimestamp="2025-11-25 07:43:09 +0000 UTC" firstStartedPulling="2025-11-25 07:43:11.313334279 +0000 UTC m=+3365.801565538" lastFinishedPulling="2025-11-25 07:43:13.825836425 +0000 UTC m=+3368.314067684" observedRunningTime="2025-11-25 07:43:14.347796807 +0000 UTC m=+3368.836028066" watchObservedRunningTime="2025-11-25 07:43:14.355482732 +0000 UTC m=+3368.843713982" Nov 25 07:43:20 crc kubenswrapper[4482]: I1125 07:43:20.092645 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:20 crc kubenswrapper[4482]: I1125 07:43:20.093530 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 
07:43:20 crc kubenswrapper[4482]: I1125 07:43:20.131084 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:20 crc kubenswrapper[4482]: I1125 07:43:20.416541 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:20 crc kubenswrapper[4482]: I1125 07:43:20.454067 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b9s5"] Nov 25 07:43:22 crc kubenswrapper[4482]: I1125 07:43:22.396261 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4b9s5" podUID="4f84e88a-3073-4827-a58c-e577e1cd4fa8" containerName="registry-server" containerID="cri-o://59d983f4da6dfbf878225b78914b3aee826a85fbc4471bd554120fbeaab83326" gracePeriod=2 Nov 25 07:43:22 crc kubenswrapper[4482]: I1125 07:43:22.863078 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.006952 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f84e88a-3073-4827-a58c-e577e1cd4fa8-utilities\") pod \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\" (UID: \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\") " Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.007365 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpqfg\" (UniqueName: \"kubernetes.io/projected/4f84e88a-3073-4827-a58c-e577e1cd4fa8-kube-api-access-vpqfg\") pod \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\" (UID: \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\") " Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.007707 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f84e88a-3073-4827-a58c-e577e1cd4fa8-catalog-content\") pod \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\" (UID: \"4f84e88a-3073-4827-a58c-e577e1cd4fa8\") " Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.009494 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f84e88a-3073-4827-a58c-e577e1cd4fa8-utilities" (OuterVolumeSpecName: "utilities") pod "4f84e88a-3073-4827-a58c-e577e1cd4fa8" (UID: "4f84e88a-3073-4827-a58c-e577e1cd4fa8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.016008 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f84e88a-3073-4827-a58c-e577e1cd4fa8-kube-api-access-vpqfg" (OuterVolumeSpecName: "kube-api-access-vpqfg") pod "4f84e88a-3073-4827-a58c-e577e1cd4fa8" (UID: "4f84e88a-3073-4827-a58c-e577e1cd4fa8"). InnerVolumeSpecName "kube-api-access-vpqfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.023888 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f84e88a-3073-4827-a58c-e577e1cd4fa8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f84e88a-3073-4827-a58c-e577e1cd4fa8" (UID: "4f84e88a-3073-4827-a58c-e577e1cd4fa8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.111937 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpqfg\" (UniqueName: \"kubernetes.io/projected/4f84e88a-3073-4827-a58c-e577e1cd4fa8-kube-api-access-vpqfg\") on node \"crc\" DevicePath \"\"" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.112297 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f84e88a-3073-4827-a58c-e577e1cd4fa8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.112313 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f84e88a-3073-4827-a58c-e577e1cd4fa8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.406335 4482 generic.go:334] "Generic (PLEG): container finished" podID="4f84e88a-3073-4827-a58c-e577e1cd4fa8" containerID="59d983f4da6dfbf878225b78914b3aee826a85fbc4471bd554120fbeaab83326" exitCode=0 Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.406389 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4b9s5" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.406434 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9s5" event={"ID":"4f84e88a-3073-4827-a58c-e577e1cd4fa8","Type":"ContainerDied","Data":"59d983f4da6dfbf878225b78914b3aee826a85fbc4471bd554120fbeaab83326"} Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.407536 4482 scope.go:117] "RemoveContainer" containerID="59d983f4da6dfbf878225b78914b3aee826a85fbc4471bd554120fbeaab83326" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.407892 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b9s5" event={"ID":"4f84e88a-3073-4827-a58c-e577e1cd4fa8","Type":"ContainerDied","Data":"4ed5a463bf0e0076bb9ae9851e89d26a04b9bee62a973f81acb991610e14321e"} Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.440716 4482 scope.go:117] "RemoveContainer" containerID="c4ac8cae1df765745ec10bc9b3975dcddd6c8b90de81ed45ff19c5276cbb52af" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.451149 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b9s5"] Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.459987 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b9s5"] Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.470697 4482 scope.go:117] "RemoveContainer" containerID="140ced5815f225a27879caa263556e91b38d2b7eb443121c58165e5073a09b94" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.491937 4482 scope.go:117] "RemoveContainer" containerID="59d983f4da6dfbf878225b78914b3aee826a85fbc4471bd554120fbeaab83326" Nov 25 07:43:23 crc kubenswrapper[4482]: E1125 07:43:23.493985 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d983f4da6dfbf878225b78914b3aee826a85fbc4471bd554120fbeaab83326\": container with ID starting with 59d983f4da6dfbf878225b78914b3aee826a85fbc4471bd554120fbeaab83326 not found: ID does not exist" containerID="59d983f4da6dfbf878225b78914b3aee826a85fbc4471bd554120fbeaab83326" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.494500 4482 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d983f4da6dfbf878225b78914b3aee826a85fbc4471bd554120fbeaab83326"} err="failed to get container status \"59d983f4da6dfbf878225b78914b3aee826a85fbc4471bd554120fbeaab83326\": rpc error: code = NotFound desc = could not find container \"59d983f4da6dfbf878225b78914b3aee826a85fbc4471bd554120fbeaab83326\": container with ID starting with 59d983f4da6dfbf878225b78914b3aee826a85fbc4471bd554120fbeaab83326 not found: ID does not exist" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.494540 4482 scope.go:117] "RemoveContainer" containerID="c4ac8cae1df765745ec10bc9b3975dcddd6c8b90de81ed45ff19c5276cbb52af" Nov 25 07:43:23 crc kubenswrapper[4482]: E1125 07:43:23.495023 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ac8cae1df765745ec10bc9b3975dcddd6c8b90de81ed45ff19c5276cbb52af\": container with ID starting with c4ac8cae1df765745ec10bc9b3975dcddd6c8b90de81ed45ff19c5276cbb52af not found: ID does not exist" containerID="c4ac8cae1df765745ec10bc9b3975dcddd6c8b90de81ed45ff19c5276cbb52af" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.495055 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ac8cae1df765745ec10bc9b3975dcddd6c8b90de81ed45ff19c5276cbb52af"} err="failed to get container status \"c4ac8cae1df765745ec10bc9b3975dcddd6c8b90de81ed45ff19c5276cbb52af\": rpc error: code = NotFound desc = could not find container \"c4ac8cae1df765745ec10bc9b3975dcddd6c8b90de81ed45ff19c5276cbb52af\": container with ID starting with c4ac8cae1df765745ec10bc9b3975dcddd6c8b90de81ed45ff19c5276cbb52af not found: ID does not exist" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.495078 4482 scope.go:117] "RemoveContainer" containerID="140ced5815f225a27879caa263556e91b38d2b7eb443121c58165e5073a09b94" Nov 25 07:43:23 crc kubenswrapper[4482]: E1125 07:43:23.495619 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140ced5815f225a27879caa263556e91b38d2b7eb443121c58165e5073a09b94\": container with ID starting with 140ced5815f225a27879caa263556e91b38d2b7eb443121c58165e5073a09b94 not found: ID does not exist" containerID="140ced5815f225a27879caa263556e91b38d2b7eb443121c58165e5073a09b94" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.495673 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140ced5815f225a27879caa263556e91b38d2b7eb443121c58165e5073a09b94"} err="failed to get container status \"140ced5815f225a27879caa263556e91b38d2b7eb443121c58165e5073a09b94\": rpc error: code = NotFound desc = could not find container \"140ced5815f225a27879caa263556e91b38d2b7eb443121c58165e5073a09b94\": container with ID starting with 140ced5815f225a27879caa263556e91b38d2b7eb443121c58165e5073a09b94 not found: ID does not exist" Nov 25 07:43:23 crc kubenswrapper[4482]: I1125 07:43:23.841925 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f84e88a-3073-4827-a58c-e577e1cd4fa8" path="/var/lib/kubelet/pods/4f84e88a-3073-4827-a58c-e577e1cd4fa8/volumes" Nov 25 07:43:39 crc kubenswrapper[4482]: I1125 07:43:39.118097 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Nov 25 07:43:39 crc kubenswrapper[4482]: I1125 07:43:39.118684 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 07:43:39 crc kubenswrapper[4482]: I1125 07:43:39.118737 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz"
Nov 25 07:43:39 crc kubenswrapper[4482]: I1125 07:43:39.119713 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ce3631ef3681014543864be97a86fd66fac2ab88fbb1ecc2f8ef2fc997ce1c7"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 07:43:39 crc kubenswrapper[4482]: I1125 07:43:39.119784 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://3ce3631ef3681014543864be97a86fd66fac2ab88fbb1ecc2f8ef2fc997ce1c7" gracePeriod=600
Nov 25 07:43:39 crc kubenswrapper[4482]: I1125 07:43:39.525000 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="3ce3631ef3681014543864be97a86fd66fac2ab88fbb1ecc2f8ef2fc997ce1c7" exitCode=0
Nov 25 07:43:39 crc kubenswrapper[4482]: I1125 07:43:39.525072 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"3ce3631ef3681014543864be97a86fd66fac2ab88fbb1ecc2f8ef2fc997ce1c7"}
Nov 25 07:43:39 crc kubenswrapper[4482]: I1125 07:43:39.525293 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b"}
Nov 25 07:43:39 crc kubenswrapper[4482]: I1125 07:43:39.525313 4482 scope.go:117] "RemoveContainer" containerID="49013f6443f476d2fd835316fc41025ac2bba0fe76c7aa5b4f2b955bd78b564c"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.141192 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"]
Nov 25 07:45:00 crc kubenswrapper[4482]: E1125 07:45:00.142293 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f84e88a-3073-4827-a58c-e577e1cd4fa8" containerName="registry-server"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.142307 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f84e88a-3073-4827-a58c-e577e1cd4fa8" containerName="registry-server"
Nov 25 07:45:00 crc kubenswrapper[4482]: E1125 07:45:00.142330 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f84e88a-3073-4827-a58c-e577e1cd4fa8" containerName="extract-content"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.142336 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f84e88a-3073-4827-a58c-e577e1cd4fa8" containerName="extract-content"
Nov 25 07:45:00 crc kubenswrapper[4482]: E1125 07:45:00.142348 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f84e88a-3073-4827-a58c-e577e1cd4fa8" containerName="extract-utilities"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.142357 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f84e88a-3073-4827-a58c-e577e1cd4fa8" containerName="extract-utilities"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.142584 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f84e88a-3073-4827-a58c-e577e1cd4fa8" containerName="registry-server"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.143245 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.147810 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"]
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.151253 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.173904 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.273938 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-secret-volume\") pod \"collect-profiles-29400945-p5qrv\" (UID: \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.273980 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-config-volume\") pod \"collect-profiles-29400945-p5qrv\" (UID: \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.274555 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjm85\" (UniqueName: \"kubernetes.io/projected/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-kube-api-access-mjm85\") pod \"collect-profiles-29400945-p5qrv\" (UID: \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.375928 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjm85\" (UniqueName: \"kubernetes.io/projected/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-kube-api-access-mjm85\") pod \"collect-profiles-29400945-p5qrv\" (UID: \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.376060 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-secret-volume\") pod \"collect-profiles-29400945-p5qrv\" (UID: \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.376083 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-config-volume\") pod \"collect-profiles-29400945-p5qrv\" (UID: \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.377319 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-config-volume\") pod \"collect-profiles-29400945-p5qrv\" (UID: \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.383069 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-secret-volume\") pod \"collect-profiles-29400945-p5qrv\" (UID: \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.398907 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjm85\" (UniqueName: \"kubernetes.io/projected/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-kube-api-access-mjm85\") pod \"collect-profiles-29400945-p5qrv\" (UID: \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.467649 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"
Nov 25 07:45:00 crc kubenswrapper[4482]: I1125 07:45:00.900476 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"]
Nov 25 07:45:01 crc kubenswrapper[4482]: I1125 07:45:01.118760 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv" event={"ID":"c6231fe7-2d66-480b-93fb-5bb66a84dcaf","Type":"ContainerStarted","Data":"a7a94b36878e746b6641c9204bbce80c13a3d148db3cfe3d574abc5cdd339e5a"}
Nov 25 07:45:01 crc kubenswrapper[4482]: I1125 07:45:01.119082 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv" event={"ID":"c6231fe7-2d66-480b-93fb-5bb66a84dcaf","Type":"ContainerStarted","Data":"1d8ed5c375c5f42527c8618fdaacbf68f641445fc654e5ed9da8a41b73531931"}
Nov 25 07:45:01 crc kubenswrapper[4482]: I1125 07:45:01.132412 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv" podStartSLOduration=1.132394825 podStartE2EDuration="1.132394825s" podCreationTimestamp="2025-11-25 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:45:01.129004164 +0000 UTC m=+3475.617235423" watchObservedRunningTime="2025-11-25 07:45:01.132394825 +0000 UTC m=+3475.620626083"
Nov 25 07:45:02 crc kubenswrapper[4482]: I1125 07:45:02.165623 4482 generic.go:334] "Generic (PLEG): container finished" podID="c6231fe7-2d66-480b-93fb-5bb66a84dcaf" containerID="a7a94b36878e746b6641c9204bbce80c13a3d148db3cfe3d574abc5cdd339e5a" exitCode=0
Nov 25 07:45:02 crc kubenswrapper[4482]: I1125 07:45:02.166729 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv" event={"ID":"c6231fe7-2d66-480b-93fb-5bb66a84dcaf","Type":"ContainerDied","Data":"a7a94b36878e746b6641c9204bbce80c13a3d148db3cfe3d574abc5cdd339e5a"}
Nov 25 07:45:03 crc kubenswrapper[4482]: I1125 07:45:03.508131 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"
Nov 25 07:45:03 crc kubenswrapper[4482]: I1125 07:45:03.546359 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-config-volume\") pod \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\" (UID: \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\") "
Nov 25 07:45:03 crc kubenswrapper[4482]: I1125 07:45:03.546479 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-secret-volume\") pod \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\" (UID: \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\") "
Nov 25 07:45:03 crc kubenswrapper[4482]: I1125 07:45:03.546586 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjm85\" (UniqueName: \"kubernetes.io/projected/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-kube-api-access-mjm85\") pod \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\" (UID: \"c6231fe7-2d66-480b-93fb-5bb66a84dcaf\") "
Nov 25 07:45:03 crc kubenswrapper[4482]: I1125 07:45:03.547107 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-config-volume" (OuterVolumeSpecName: "config-volume") pod "c6231fe7-2d66-480b-93fb-5bb66a84dcaf" (UID: "c6231fe7-2d66-480b-93fb-5bb66a84dcaf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:45:03 crc kubenswrapper[4482]: I1125 07:45:03.554098 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c6231fe7-2d66-480b-93fb-5bb66a84dcaf" (UID: "c6231fe7-2d66-480b-93fb-5bb66a84dcaf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:45:03 crc kubenswrapper[4482]: I1125 07:45:03.554323 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-kube-api-access-mjm85" (OuterVolumeSpecName: "kube-api-access-mjm85") pod "c6231fe7-2d66-480b-93fb-5bb66a84dcaf" (UID: "c6231fe7-2d66-480b-93fb-5bb66a84dcaf"). InnerVolumeSpecName "kube-api-access-mjm85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:45:03 crc kubenswrapper[4482]: I1125 07:45:03.648703 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjm85\" (UniqueName: \"kubernetes.io/projected/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-kube-api-access-mjm85\") on node \"crc\" DevicePath \"\""
Nov 25 07:45:03 crc kubenswrapper[4482]: I1125 07:45:03.648730 4482 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-config-volume\") on node \"crc\" DevicePath \"\""
Nov 25 07:45:03 crc kubenswrapper[4482]: I1125 07:45:03.648739 4482 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6231fe7-2d66-480b-93fb-5bb66a84dcaf-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 25 07:45:03 crc kubenswrapper[4482]: E1125 07:45:03.967528 4482 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6231fe7_2d66_480b_93fb_5bb66a84dcaf.slice/crio-1d8ed5c375c5f42527c8618fdaacbf68f641445fc654e5ed9da8a41b73531931\": RecentStats: unable to find data in memory cache]"
Nov 25 07:45:04 crc kubenswrapper[4482]: I1125 07:45:04.184237 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv" event={"ID":"c6231fe7-2d66-480b-93fb-5bb66a84dcaf","Type":"ContainerDied","Data":"1d8ed5c375c5f42527c8618fdaacbf68f641445fc654e5ed9da8a41b73531931"}
Nov 25 07:45:04 crc kubenswrapper[4482]: I1125 07:45:04.184301 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d8ed5c375c5f42527c8618fdaacbf68f641445fc654e5ed9da8a41b73531931"
Nov 25 07:45:04 crc kubenswrapper[4482]: I1125 07:45:04.184675 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv" Nov 25 07:45:04 crc kubenswrapper[4482]: I1125 07:45:04.205499 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz"] Nov 25 07:45:04 crc kubenswrapper[4482]: I1125 07:45:04.226651 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400900-p7wjz"] Nov 25 07:45:05 crc kubenswrapper[4482]: I1125 07:45:05.872585 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef458b3-5100-4773-8b07-ed066b2b29ee" path="/var/lib/kubelet/pods/4ef458b3-5100-4773-8b07-ed066b2b29ee/volumes" Nov 25 07:45:39 crc kubenswrapper[4482]: I1125 07:45:39.118404 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:45:39 crc kubenswrapper[4482]: I1125 07:45:39.119229 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:45:40 crc kubenswrapper[4482]: I1125 07:45:40.693608 4482 scope.go:117] "RemoveContainer" containerID="f98debb079f3ce73ab891e627ada742d3b36fe6821b27abc4678decacc1f480a" Nov 25 07:46:09 crc kubenswrapper[4482]: I1125 07:46:09.118058 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:46:09 crc kubenswrapper[4482]: I1125 07:46:09.118689 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.331663 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r2ckl"] Nov 25 07:46:27 crc kubenswrapper[4482]: E1125 07:46:27.332657 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6231fe7-2d66-480b-93fb-5bb66a84dcaf" containerName="collect-profiles" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.332673 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6231fe7-2d66-480b-93fb-5bb66a84dcaf" containerName="collect-profiles" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.332902 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6231fe7-2d66-480b-93fb-5bb66a84dcaf" containerName="collect-profiles" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.334265 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.341366 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r2ckl"] Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.526449 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tb2q\" (UniqueName: \"kubernetes.io/projected/049f855e-de90-41aa-99fc-fea8c09b42f9-kube-api-access-7tb2q\") pod \"community-operators-r2ckl\" (UID: \"049f855e-de90-41aa-99fc-fea8c09b42f9\") " pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.526791 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/049f855e-de90-41aa-99fc-fea8c09b42f9-catalog-content\") pod \"community-operators-r2ckl\" (UID: \"049f855e-de90-41aa-99fc-fea8c09b42f9\") " pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.526827 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/049f855e-de90-41aa-99fc-fea8c09b42f9-utilities\") pod \"community-operators-r2ckl\" (UID: \"049f855e-de90-41aa-99fc-fea8c09b42f9\") " pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.628442 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tb2q\" (UniqueName: \"kubernetes.io/projected/049f855e-de90-41aa-99fc-fea8c09b42f9-kube-api-access-7tb2q\") pod \"community-operators-r2ckl\" (UID: \"049f855e-de90-41aa-99fc-fea8c09b42f9\") " pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.628532 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/049f855e-de90-41aa-99fc-fea8c09b42f9-catalog-content\") pod \"community-operators-r2ckl\" (UID: \"049f855e-de90-41aa-99fc-fea8c09b42f9\") " pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.628619 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/049f855e-de90-41aa-99fc-fea8c09b42f9-utilities\") pod \"community-operators-r2ckl\" (UID: \"049f855e-de90-41aa-99fc-fea8c09b42f9\") " pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.629091 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/049f855e-de90-41aa-99fc-fea8c09b42f9-catalog-content\") pod \"community-operators-r2ckl\" (UID: \"049f855e-de90-41aa-99fc-fea8c09b42f9\") " pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.629150 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/049f855e-de90-41aa-99fc-fea8c09b42f9-utilities\") pod \"community-operators-r2ckl\" (UID: \"049f855e-de90-41aa-99fc-fea8c09b42f9\") " pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.653841 4482 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7tb2q\" (UniqueName: \"kubernetes.io/projected/049f855e-de90-41aa-99fc-fea8c09b42f9-kube-api-access-7tb2q\") pod \"community-operators-r2ckl\" (UID: \"049f855e-de90-41aa-99fc-fea8c09b42f9\") " pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:27 crc kubenswrapper[4482]: I1125 07:46:27.951692 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:28 crc kubenswrapper[4482]: I1125 07:46:28.360382 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r2ckl"] Nov 25 07:46:28 crc kubenswrapper[4482]: I1125 07:46:28.828192 4482 generic.go:334] "Generic (PLEG): container finished" podID="049f855e-de90-41aa-99fc-fea8c09b42f9" containerID="eca0156b5236d23f590fc9c89e12f5e2237a56f6b8b67b7d354e2ed5f003b6db" exitCode=0 Nov 25 07:46:28 crc kubenswrapper[4482]: I1125 07:46:28.828294 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2ckl" event={"ID":"049f855e-de90-41aa-99fc-fea8c09b42f9","Type":"ContainerDied","Data":"eca0156b5236d23f590fc9c89e12f5e2237a56f6b8b67b7d354e2ed5f003b6db"} Nov 25 07:46:28 crc kubenswrapper[4482]: I1125 07:46:28.828544 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2ckl" event={"ID":"049f855e-de90-41aa-99fc-fea8c09b42f9","Type":"ContainerStarted","Data":"63a25c47c344cfd95a00bdd11cb5202c3196d35eb26237b11b9f5af4649388d5"} Nov 25 07:46:29 crc kubenswrapper[4482]: I1125 07:46:29.839281 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2ckl" event={"ID":"049f855e-de90-41aa-99fc-fea8c09b42f9","Type":"ContainerStarted","Data":"c899a36319b7cf184c84c458731331392de1dadeecf2e00cf00cbe006ef6e10b"} Nov 25 07:46:30 crc kubenswrapper[4482]: I1125 07:46:30.848245 4482 generic.go:334] "Generic (PLEG): container finished" podID="049f855e-de90-41aa-99fc-fea8c09b42f9" containerID="c899a36319b7cf184c84c458731331392de1dadeecf2e00cf00cbe006ef6e10b" exitCode=0 Nov 25 07:46:30 crc kubenswrapper[4482]: I1125 07:46:30.848434 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2ckl" event={"ID":"049f855e-de90-41aa-99fc-fea8c09b42f9","Type":"ContainerDied","Data":"c899a36319b7cf184c84c458731331392de1dadeecf2e00cf00cbe006ef6e10b"} Nov 25 07:46:31 crc kubenswrapper[4482]: I1125 07:46:31.857873 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2ckl" event={"ID":"049f855e-de90-41aa-99fc-fea8c09b42f9","Type":"ContainerStarted","Data":"6de0a24fb4eec52880db2dff45124edc1013ec6abe6057a8d6e3e5aa099103af"} Nov 25 07:46:37 crc kubenswrapper[4482]: I1125 07:46:37.952816 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:37 crc kubenswrapper[4482]: I1125 07:46:37.953294 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:37 crc kubenswrapper[4482]: I1125 07:46:37.990773 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:38 crc kubenswrapper[4482]: I1125 07:46:38.009882 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-r2ckl" podStartSLOduration=8.490659918 podStartE2EDuration="11.009866237s" podCreationTimestamp="2025-11-25 07:46:27 +0000 UTC" firstStartedPulling="2025-11-25 07:46:28.830608078 +0000 UTC m=+3563.318839337" lastFinishedPulling="2025-11-25 07:46:31.349814397 +0000 UTC m=+3565.838045656" observedRunningTime="2025-11-25 07:46:31.872543453 +0000 UTC m=+3566.360774713" watchObservedRunningTime="2025-11-25 07:46:38.009866237 +0000 UTC m=+3572.498097486" Nov 25 07:46:38 crc kubenswrapper[4482]: I1125 07:46:38.947542 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:39 crc kubenswrapper[4482]: I1125 07:46:39.035704 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r2ckl"] Nov 25 07:46:39 crc kubenswrapper[4482]: I1125 07:46:39.118295 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:46:39 crc kubenswrapper[4482]: I1125 07:46:39.118378 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:46:39 crc kubenswrapper[4482]: I1125 07:46:39.118439 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 07:46:39 crc kubenswrapper[4482]: I1125 07:46:39.119373 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:46:39 crc kubenswrapper[4482]: I1125 07:46:39.119438 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" gracePeriod=600 Nov 25 07:46:39 crc kubenswrapper[4482]: E1125 07:46:39.239045 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:46:39 crc kubenswrapper[4482]: I1125 07:46:39.920814 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" exitCode=0 Nov 25 07:46:39 crc kubenswrapper[4482]: I1125 07:46:39.920873 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b"} Nov 25 07:46:39 crc kubenswrapper[4482]: I1125 07:46:39.921122 4482 scope.go:117] "RemoveContainer" containerID="3ce3631ef3681014543864be97a86fd66fac2ab88fbb1ecc2f8ef2fc997ce1c7" Nov 25 07:46:39 crc kubenswrapper[4482]: I1125 07:46:39.921776 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:46:39 crc kubenswrapper[4482]: E1125 07:46:39.922029 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:46:40 crc kubenswrapper[4482]: I1125 07:46:40.931117 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r2ckl" podUID="049f855e-de90-41aa-99fc-fea8c09b42f9" containerName="registry-server" containerID="cri-o://6de0a24fb4eec52880db2dff45124edc1013ec6abe6057a8d6e3e5aa099103af" gracePeriod=2 Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.336807 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.391474 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tb2q\" (UniqueName: \"kubernetes.io/projected/049f855e-de90-41aa-99fc-fea8c09b42f9-kube-api-access-7tb2q\") pod \"049f855e-de90-41aa-99fc-fea8c09b42f9\" (UID: \"049f855e-de90-41aa-99fc-fea8c09b42f9\") " Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.391572 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/049f855e-de90-41aa-99fc-fea8c09b42f9-utilities\") pod \"049f855e-de90-41aa-99fc-fea8c09b42f9\" (UID: \"049f855e-de90-41aa-99fc-fea8c09b42f9\") " Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.392303 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/049f855e-de90-41aa-99fc-fea8c09b42f9-utilities" (OuterVolumeSpecName: "utilities") pod "049f855e-de90-41aa-99fc-fea8c09b42f9" (UID: "049f855e-de90-41aa-99fc-fea8c09b42f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.398945 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049f855e-de90-41aa-99fc-fea8c09b42f9-kube-api-access-7tb2q" (OuterVolumeSpecName: "kube-api-access-7tb2q") pod "049f855e-de90-41aa-99fc-fea8c09b42f9" (UID: "049f855e-de90-41aa-99fc-fea8c09b42f9"). InnerVolumeSpecName "kube-api-access-7tb2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.493640 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/049f855e-de90-41aa-99fc-fea8c09b42f9-catalog-content\") pod \"049f855e-de90-41aa-99fc-fea8c09b42f9\" (UID: \"049f855e-de90-41aa-99fc-fea8c09b42f9\") " Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.494666 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tb2q\" (UniqueName: \"kubernetes.io/projected/049f855e-de90-41aa-99fc-fea8c09b42f9-kube-api-access-7tb2q\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.494686 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/049f855e-de90-41aa-99fc-fea8c09b42f9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.531946 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/049f855e-de90-41aa-99fc-fea8c09b42f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "049f855e-de90-41aa-99fc-fea8c09b42f9" (UID: "049f855e-de90-41aa-99fc-fea8c09b42f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.595646 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/049f855e-de90-41aa-99fc-fea8c09b42f9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.943422 4482 generic.go:334] "Generic (PLEG): container finished" podID="049f855e-de90-41aa-99fc-fea8c09b42f9" containerID="6de0a24fb4eec52880db2dff45124edc1013ec6abe6057a8d6e3e5aa099103af" exitCode=0 Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.943464 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2ckl" event={"ID":"049f855e-de90-41aa-99fc-fea8c09b42f9","Type":"ContainerDied","Data":"6de0a24fb4eec52880db2dff45124edc1013ec6abe6057a8d6e3e5aa099103af"} Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.943482 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r2ckl" Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.943497 4482 scope.go:117] "RemoveContainer" containerID="6de0a24fb4eec52880db2dff45124edc1013ec6abe6057a8d6e3e5aa099103af" Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.943487 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2ckl" event={"ID":"049f855e-de90-41aa-99fc-fea8c09b42f9","Type":"ContainerDied","Data":"63a25c47c344cfd95a00bdd11cb5202c3196d35eb26237b11b9f5af4649388d5"} Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.959448 4482 scope.go:117] "RemoveContainer" containerID="c899a36319b7cf184c84c458731331392de1dadeecf2e00cf00cbe006ef6e10b" Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.976417 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r2ckl"] Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.978333 4482 scope.go:117] "RemoveContainer" containerID="eca0156b5236d23f590fc9c89e12f5e2237a56f6b8b67b7d354e2ed5f003b6db" Nov 25 07:46:41 crc kubenswrapper[4482]: I1125 07:46:41.979334 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r2ckl"] Nov 25 07:46:42 crc kubenswrapper[4482]: I1125 07:46:42.010183 4482 scope.go:117] "RemoveContainer" containerID="6de0a24fb4eec52880db2dff45124edc1013ec6abe6057a8d6e3e5aa099103af" Nov 25 07:46:42 crc kubenswrapper[4482]: E1125 07:46:42.010644 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de0a24fb4eec52880db2dff45124edc1013ec6abe6057a8d6e3e5aa099103af\": container with ID starting with 6de0a24fb4eec52880db2dff45124edc1013ec6abe6057a8d6e3e5aa099103af not found: ID does not exist" containerID="6de0a24fb4eec52880db2dff45124edc1013ec6abe6057a8d6e3e5aa099103af" Nov 25 07:46:42 crc kubenswrapper[4482]: I1125 07:46:42.010684 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de0a24fb4eec52880db2dff45124edc1013ec6abe6057a8d6e3e5aa099103af"} err="failed to get container status \"6de0a24fb4eec52880db2dff45124edc1013ec6abe6057a8d6e3e5aa099103af\": rpc error: code = NotFound desc = could not find container \"6de0a24fb4eec52880db2dff45124edc1013ec6abe6057a8d6e3e5aa099103af\": container with ID starting with 6de0a24fb4eec52880db2dff45124edc1013ec6abe6057a8d6e3e5aa099103af not found: ID does not exist" Nov 25 07:46:42 crc kubenswrapper[4482]: I1125 07:46:42.010710 4482 scope.go:117] "RemoveContainer" containerID="c899a36319b7cf184c84c458731331392de1dadeecf2e00cf00cbe006ef6e10b" Nov 25 07:46:42 crc kubenswrapper[4482]: E1125 07:46:42.011059 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c899a36319b7cf184c84c458731331392de1dadeecf2e00cf00cbe006ef6e10b\": container with ID starting with c899a36319b7cf184c84c458731331392de1dadeecf2e00cf00cbe006ef6e10b not found: ID does not exist" containerID="c899a36319b7cf184c84c458731331392de1dadeecf2e00cf00cbe006ef6e10b" Nov 25 07:46:42 crc kubenswrapper[4482]: I1125 07:46:42.011094 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c899a36319b7cf184c84c458731331392de1dadeecf2e00cf00cbe006ef6e10b"} err="failed to get container status \"c899a36319b7cf184c84c458731331392de1dadeecf2e00cf00cbe006ef6e10b\": rpc error: code = NotFound desc = could not find 
container \"c899a36319b7cf184c84c458731331392de1dadeecf2e00cf00cbe006ef6e10b\": container with ID starting with c899a36319b7cf184c84c458731331392de1dadeecf2e00cf00cbe006ef6e10b not found: ID does not exist" Nov 25 07:46:42 crc kubenswrapper[4482]: I1125 07:46:42.011119 4482 scope.go:117] "RemoveContainer" containerID="eca0156b5236d23f590fc9c89e12f5e2237a56f6b8b67b7d354e2ed5f003b6db" Nov 25 07:46:42 crc kubenswrapper[4482]: E1125 07:46:42.011397 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca0156b5236d23f590fc9c89e12f5e2237a56f6b8b67b7d354e2ed5f003b6db\": container with ID starting with eca0156b5236d23f590fc9c89e12f5e2237a56f6b8b67b7d354e2ed5f003b6db not found: ID does not exist" containerID="eca0156b5236d23f590fc9c89e12f5e2237a56f6b8b67b7d354e2ed5f003b6db" Nov 25 07:46:42 crc kubenswrapper[4482]: I1125 07:46:42.011428 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca0156b5236d23f590fc9c89e12f5e2237a56f6b8b67b7d354e2ed5f003b6db"} err="failed to get container status \"eca0156b5236d23f590fc9c89e12f5e2237a56f6b8b67b7d354e2ed5f003b6db\": rpc error: code = NotFound desc = could not find container \"eca0156b5236d23f590fc9c89e12f5e2237a56f6b8b67b7d354e2ed5f003b6db\": container with ID starting with eca0156b5236d23f590fc9c89e12f5e2237a56f6b8b67b7d354e2ed5f003b6db not found: ID does not exist" Nov 25 07:46:43 crc kubenswrapper[4482]: I1125 07:46:43.840182 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="049f855e-de90-41aa-99fc-fea8c09b42f9" path="/var/lib/kubelet/pods/049f855e-de90-41aa-99fc-fea8c09b42f9/volumes" Nov 25 07:46:50 crc kubenswrapper[4482]: I1125 07:46:50.830691 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:46:50 crc kubenswrapper[4482]: E1125 07:46:50.831108 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.297940 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cjg9c"] Nov 25 07:46:55 crc kubenswrapper[4482]: E1125 07:46:55.298707 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049f855e-de90-41aa-99fc-fea8c09b42f9" containerName="extract-content" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.298719 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="049f855e-de90-41aa-99fc-fea8c09b42f9" containerName="extract-content" Nov 25 07:46:55 crc kubenswrapper[4482]: E1125 07:46:55.298748 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049f855e-de90-41aa-99fc-fea8c09b42f9" containerName="registry-server" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.298754 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="049f855e-de90-41aa-99fc-fea8c09b42f9" containerName="registry-server" Nov 25 07:46:55 crc kubenswrapper[4482]: E1125 07:46:55.298772 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049f855e-de90-41aa-99fc-fea8c09b42f9" containerName="extract-utilities" Nov 25 07:46:55 
Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.298777 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="049f855e-de90-41aa-99fc-fea8c09b42f9" containerName="extract-utilities" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.298972 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="049f855e-de90-41aa-99fc-fea8c09b42f9" containerName="registry-server" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.300245 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.304759 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjg9c"] Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.428391 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwfql\" (UniqueName: \"kubernetes.io/projected/fc210f23-c632-4e92-b4c3-2e5b516a77f4-kube-api-access-jwfql\") pod \"certified-operators-cjg9c\" (UID: \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\") " pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.428554 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc210f23-c632-4e92-b4c3-2e5b516a77f4-utilities\") pod \"certified-operators-cjg9c\" (UID: \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\") " pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.428603 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc210f23-c632-4e92-b4c3-2e5b516a77f4-catalog-content\") pod \"certified-operators-cjg9c\" (UID: \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\") " pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.530443 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc210f23-c632-4e92-b4c3-2e5b516a77f4-utilities\") pod \"certified-operators-cjg9c\" (UID: \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\") " pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.530478 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc210f23-c632-4e92-b4c3-2e5b516a77f4-catalog-content\") pod \"certified-operators-cjg9c\" (UID: \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\") " pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.530575 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwfql\" (UniqueName: \"kubernetes.io/projected/fc210f23-c632-4e92-b4c3-2e5b516a77f4-kube-api-access-jwfql\") pod \"certified-operators-cjg9c\" (UID: \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\") " pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.530938 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc210f23-c632-4e92-b4c3-2e5b516a77f4-utilities\") pod \"certified-operators-cjg9c\" (UID: \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\") " 
pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.530976 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc210f23-c632-4e92-b4c3-2e5b516a77f4-catalog-content\") pod \"certified-operators-cjg9c\" (UID: \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\") " pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.546371 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwfql\" (UniqueName: \"kubernetes.io/projected/fc210f23-c632-4e92-b4c3-2e5b516a77f4-kube-api-access-jwfql\") pod \"certified-operators-cjg9c\" (UID: \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\") " pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:46:55 crc kubenswrapper[4482]: I1125 07:46:55.615254 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:46:56 crc kubenswrapper[4482]: W1125 07:46:56.008063 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc210f23_c632_4e92_b4c3_2e5b516a77f4.slice/crio-22e25d28f5d4bed23298b6d1338b1a4a801c36ce19ef4be039eef248dcd3a4f9 WatchSource:0}: Error finding container 22e25d28f5d4bed23298b6d1338b1a4a801c36ce19ef4be039eef248dcd3a4f9: Status 404 returned error can't find the container with id 22e25d28f5d4bed23298b6d1338b1a4a801c36ce19ef4be039eef248dcd3a4f9 Nov 25 07:46:56 crc kubenswrapper[4482]: I1125 07:46:56.012233 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjg9c"] Nov 25 07:46:56 crc kubenswrapper[4482]: I1125 07:46:56.038938 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjg9c" event={"ID":"fc210f23-c632-4e92-b4c3-2e5b516a77f4","Type":"ContainerStarted","Data":"22e25d28f5d4bed23298b6d1338b1a4a801c36ce19ef4be039eef248dcd3a4f9"} Nov 25 07:46:56 crc kubenswrapper[4482]: E1125 07:46:56.324713 4482 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc210f23_c632_4e92_b4c3_2e5b516a77f4.slice/crio-conmon-3638180827f4c95da5480d22715d1a9bef8eb4487e1c450056bb376ba00060d5.scope\": RecentStats: unable to find data in memory cache]" Nov 25 07:46:57 crc kubenswrapper[4482]: I1125 07:46:57.047280 4482 generic.go:334] "Generic (PLEG): container finished" podID="fc210f23-c632-4e92-b4c3-2e5b516a77f4" containerID="3638180827f4c95da5480d22715d1a9bef8eb4487e1c450056bb376ba00060d5" exitCode=0 Nov 25 07:46:57 crc kubenswrapper[4482]: I1125 07:46:57.047333 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjg9c" event={"ID":"fc210f23-c632-4e92-b4c3-2e5b516a77f4","Type":"ContainerDied","Data":"3638180827f4c95da5480d22715d1a9bef8eb4487e1c450056bb376ba00060d5"} Nov 25 07:46:58 crc kubenswrapper[4482]: I1125 07:46:58.055860 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjg9c" event={"ID":"fc210f23-c632-4e92-b4c3-2e5b516a77f4","Type":"ContainerStarted","Data":"1e150824cd8d554ab1d2b69b10ec46fe378c5f9f6c1c4f234b7f9d81c8ddf7ca"} Nov 25 07:46:59 crc kubenswrapper[4482]: I1125 07:46:59.064080 4482 generic.go:334] "Generic (PLEG): container finished" 
podID="fc210f23-c632-4e92-b4c3-2e5b516a77f4" containerID="1e150824cd8d554ab1d2b69b10ec46fe378c5f9f6c1c4f234b7f9d81c8ddf7ca" exitCode=0 Nov 25 07:46:59 crc kubenswrapper[4482]: I1125 07:46:59.064118 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjg9c" event={"ID":"fc210f23-c632-4e92-b4c3-2e5b516a77f4","Type":"ContainerDied","Data":"1e150824cd8d554ab1d2b69b10ec46fe378c5f9f6c1c4f234b7f9d81c8ddf7ca"} Nov 25 07:47:00 crc kubenswrapper[4482]: I1125 07:47:00.073047 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjg9c" event={"ID":"fc210f23-c632-4e92-b4c3-2e5b516a77f4","Type":"ContainerStarted","Data":"70cdcc7009e829ea2d05becdcedf962ecb172496aa8b4b11c317509e27b7056f"} Nov 25 07:47:04 crc kubenswrapper[4482]: I1125 07:47:04.831103 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:47:04 crc kubenswrapper[4482]: E1125 07:47:04.831673 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:47:05 crc kubenswrapper[4482]: I1125 07:47:05.615679 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:47:05 crc kubenswrapper[4482]: I1125 07:47:05.615830 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:47:05 crc kubenswrapper[4482]: I1125 07:47:05.654080 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:47:05 crc kubenswrapper[4482]: I1125 07:47:05.673797 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cjg9c" podStartSLOduration=8.062847558 podStartE2EDuration="10.673780128s" podCreationTimestamp="2025-11-25 07:46:55 +0000 UTC" firstStartedPulling="2025-11-25 07:46:57.049566802 +0000 UTC m=+3591.537798061" lastFinishedPulling="2025-11-25 07:46:59.660499372 +0000 UTC m=+3594.148730631" observedRunningTime="2025-11-25 07:47:00.092307014 +0000 UTC m=+3594.580538273" watchObservedRunningTime="2025-11-25 07:47:05.673780128 +0000 UTC m=+3600.162011387" Nov 25 07:47:06 crc kubenswrapper[4482]: I1125 07:47:06.161018 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:47:06 crc kubenswrapper[4482]: I1125 07:47:06.227488 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjg9c"] Nov 25 07:47:08 crc kubenswrapper[4482]: I1125 07:47:08.143252 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cjg9c" podUID="fc210f23-c632-4e92-b4c3-2e5b516a77f4" containerName="registry-server" containerID="cri-o://70cdcc7009e829ea2d05becdcedf962ecb172496aa8b4b11c317509e27b7056f" gracePeriod=2 Nov 25 07:47:08 crc kubenswrapper[4482]: I1125 07:47:08.566719 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:47:08 crc kubenswrapper[4482]: I1125 07:47:08.695126 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc210f23-c632-4e92-b4c3-2e5b516a77f4-catalog-content\") pod \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\" (UID: \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\") " Nov 25 07:47:08 crc kubenswrapper[4482]: I1125 07:47:08.695185 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc210f23-c632-4e92-b4c3-2e5b516a77f4-utilities\") pod \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\" (UID: \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\") " Nov 25 07:47:08 crc kubenswrapper[4482]: I1125 07:47:08.695439 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwfql\" (UniqueName: \"kubernetes.io/projected/fc210f23-c632-4e92-b4c3-2e5b516a77f4-kube-api-access-jwfql\") pod \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\" (UID: \"fc210f23-c632-4e92-b4c3-2e5b516a77f4\") " Nov 25 07:47:08 crc kubenswrapper[4482]: I1125 07:47:08.696330 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc210f23-c632-4e92-b4c3-2e5b516a77f4-utilities" (OuterVolumeSpecName: "utilities") pod "fc210f23-c632-4e92-b4c3-2e5b516a77f4" (UID: "fc210f23-c632-4e92-b4c3-2e5b516a77f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:47:08 crc kubenswrapper[4482]: I1125 07:47:08.697066 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc210f23-c632-4e92-b4c3-2e5b516a77f4-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:47:08 crc kubenswrapper[4482]: I1125 07:47:08.701287 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc210f23-c632-4e92-b4c3-2e5b516a77f4-kube-api-access-jwfql" (OuterVolumeSpecName: "kube-api-access-jwfql") pod "fc210f23-c632-4e92-b4c3-2e5b516a77f4" (UID: "fc210f23-c632-4e92-b4c3-2e5b516a77f4"). InnerVolumeSpecName "kube-api-access-jwfql". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:47:08 crc kubenswrapper[4482]: I1125 07:47:08.727223 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc210f23-c632-4e92-b4c3-2e5b516a77f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc210f23-c632-4e92-b4c3-2e5b516a77f4" (UID: "fc210f23-c632-4e92-b4c3-2e5b516a77f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:47:08 crc kubenswrapper[4482]: I1125 07:47:08.798626 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwfql\" (UniqueName: \"kubernetes.io/projected/fc210f23-c632-4e92-b4c3-2e5b516a77f4-kube-api-access-jwfql\") on node \"crc\" DevicePath \"\"" Nov 25 07:47:08 crc kubenswrapper[4482]: I1125 07:47:08.798660 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc210f23-c632-4e92-b4c3-2e5b516a77f4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.153140 4482 generic.go:334] "Generic (PLEG): container finished" podID="fc210f23-c632-4e92-b4c3-2e5b516a77f4" containerID="70cdcc7009e829ea2d05becdcedf962ecb172496aa8b4b11c317509e27b7056f" exitCode=0 Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.153200 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjg9c" event={"ID":"fc210f23-c632-4e92-b4c3-2e5b516a77f4","Type":"ContainerDied","Data":"70cdcc7009e829ea2d05becdcedf962ecb172496aa8b4b11c317509e27b7056f"} Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.153250 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjg9c" event={"ID":"fc210f23-c632-4e92-b4c3-2e5b516a77f4","Type":"ContainerDied","Data":"22e25d28f5d4bed23298b6d1338b1a4a801c36ce19ef4be039eef248dcd3a4f9"} Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.153269 4482 scope.go:117] "RemoveContainer" containerID="70cdcc7009e829ea2d05becdcedf962ecb172496aa8b4b11c317509e27b7056f" Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.153804 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjg9c" Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.171728 4482 scope.go:117] "RemoveContainer" containerID="1e150824cd8d554ab1d2b69b10ec46fe378c5f9f6c1c4f234b7f9d81c8ddf7ca" Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.188149 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjg9c"] Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.201918 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cjg9c"] Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.208122 4482 scope.go:117] "RemoveContainer" containerID="3638180827f4c95da5480d22715d1a9bef8eb4487e1c450056bb376ba00060d5" Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.249199 4482 scope.go:117] "RemoveContainer" containerID="70cdcc7009e829ea2d05becdcedf962ecb172496aa8b4b11c317509e27b7056f" Nov 25 07:47:09 crc kubenswrapper[4482]: E1125 07:47:09.253271 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70cdcc7009e829ea2d05becdcedf962ecb172496aa8b4b11c317509e27b7056f\": container with ID starting with 70cdcc7009e829ea2d05becdcedf962ecb172496aa8b4b11c317509e27b7056f not found: ID does not exist" containerID="70cdcc7009e829ea2d05becdcedf962ecb172496aa8b4b11c317509e27b7056f" Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.253370 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70cdcc7009e829ea2d05becdcedf962ecb172496aa8b4b11c317509e27b7056f"} err="failed to get container status \"70cdcc7009e829ea2d05becdcedf962ecb172496aa8b4b11c317509e27b7056f\": rpc error: code = NotFound desc = could not find container \"70cdcc7009e829ea2d05becdcedf962ecb172496aa8b4b11c317509e27b7056f\": container with ID starting with 70cdcc7009e829ea2d05becdcedf962ecb172496aa8b4b11c317509e27b7056f not found: ID does not exist" Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.253462 4482 scope.go:117] "RemoveContainer" containerID="1e150824cd8d554ab1d2b69b10ec46fe378c5f9f6c1c4f234b7f9d81c8ddf7ca" Nov 25 07:47:09 crc kubenswrapper[4482]: E1125 07:47:09.254619 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e150824cd8d554ab1d2b69b10ec46fe378c5f9f6c1c4f234b7f9d81c8ddf7ca\": container with ID starting with 1e150824cd8d554ab1d2b69b10ec46fe378c5f9f6c1c4f234b7f9d81c8ddf7ca not found: ID does not exist" containerID="1e150824cd8d554ab1d2b69b10ec46fe378c5f9f6c1c4f234b7f9d81c8ddf7ca" Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.254724 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e150824cd8d554ab1d2b69b10ec46fe378c5f9f6c1c4f234b7f9d81c8ddf7ca"} err="failed to get container status \"1e150824cd8d554ab1d2b69b10ec46fe378c5f9f6c1c4f234b7f9d81c8ddf7ca\": rpc error: code = NotFound desc = could not find container \"1e150824cd8d554ab1d2b69b10ec46fe378c5f9f6c1c4f234b7f9d81c8ddf7ca\": container with ID starting with 1e150824cd8d554ab1d2b69b10ec46fe378c5f9f6c1c4f234b7f9d81c8ddf7ca not found: ID does not exist" Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.254788 4482 scope.go:117] "RemoveContainer" containerID="3638180827f4c95da5480d22715d1a9bef8eb4487e1c450056bb376ba00060d5" Nov 25 07:47:09 crc kubenswrapper[4482]: E1125 07:47:09.259231 4482 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3638180827f4c95da5480d22715d1a9bef8eb4487e1c450056bb376ba00060d5\": container with ID starting with 3638180827f4c95da5480d22715d1a9bef8eb4487e1c450056bb376ba00060d5 not found: ID does not exist" containerID="3638180827f4c95da5480d22715d1a9bef8eb4487e1c450056bb376ba00060d5" Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.259337 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3638180827f4c95da5480d22715d1a9bef8eb4487e1c450056bb376ba00060d5"} err="failed to get container status \"3638180827f4c95da5480d22715d1a9bef8eb4487e1c450056bb376ba00060d5\": rpc error: code = NotFound desc = could not find container \"3638180827f4c95da5480d22715d1a9bef8eb4487e1c450056bb376ba00060d5\": container with ID starting with 3638180827f4c95da5480d22715d1a9bef8eb4487e1c450056bb376ba00060d5 not found: ID does not exist" Nov 25 07:47:09 crc kubenswrapper[4482]: I1125 07:47:09.840556 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc210f23-c632-4e92-b4c3-2e5b516a77f4" path="/var/lib/kubelet/pods/fc210f23-c632-4e92-b4c3-2e5b516a77f4/volumes" Nov 25 07:47:19 crc kubenswrapper[4482]: I1125 07:47:19.831345 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:47:19 crc kubenswrapper[4482]: E1125 07:47:19.832021 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:47:30 crc kubenswrapper[4482]: I1125 07:47:30.831881 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:47:30 crc kubenswrapper[4482]: E1125 07:47:30.832684 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:47:44 crc kubenswrapper[4482]: I1125 07:47:44.830858 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:47:44 crc kubenswrapper[4482]: E1125 07:47:44.831431 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:47:57 crc kubenswrapper[4482]: I1125 07:47:57.830574 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:47:57 crc kubenswrapper[4482]: E1125 07:47:57.831377 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:48:10 crc kubenswrapper[4482]: I1125 07:48:10.830762 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:48:10 crc kubenswrapper[4482]: E1125 07:48:10.833107 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:48:24 crc kubenswrapper[4482]: I1125 07:48:24.831770 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:48:24 crc kubenswrapper[4482]: E1125 07:48:24.832321 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:48:36 crc kubenswrapper[4482]: I1125 07:48:36.830858 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:48:36 crc kubenswrapper[4482]: E1125 07:48:36.831406 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:48:50 crc kubenswrapper[4482]: I1125 07:48:50.830357 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:48:50 crc kubenswrapper[4482]: E1125 07:48:50.830939 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:49:01 crc kubenswrapper[4482]: I1125 07:49:01.831387 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:49:01 crc kubenswrapper[4482]: E1125 07:49:01.832043 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:49:14 crc kubenswrapper[4482]: I1125 07:49:14.830876 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:49:14 crc kubenswrapper[4482]: E1125 07:49:14.831794 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:49:26 crc kubenswrapper[4482]: I1125 07:49:26.831129 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:49:26 crc kubenswrapper[4482]: E1125 07:49:26.832998 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:49:40 crc kubenswrapper[4482]: I1125 07:49:40.831834 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:49:40 crc kubenswrapper[4482]: E1125 07:49:40.832854 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:49:53 crc kubenswrapper[4482]: I1125 07:49:53.831700 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:49:53 crc kubenswrapper[4482]: E1125 07:49:53.832482 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:50:06 crc kubenswrapper[4482]: I1125 07:50:06.831456 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:50:06 crc kubenswrapper[4482]: E1125 07:50:06.833984 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" 
podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:50:20 crc kubenswrapper[4482]: I1125 07:50:20.831089 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:50:20 crc kubenswrapper[4482]: E1125 07:50:20.831965 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:50:33 crc kubenswrapper[4482]: I1125 07:50:33.831325 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:50:33 crc kubenswrapper[4482]: E1125 07:50:33.832600 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:50:46 crc kubenswrapper[4482]: I1125 07:50:46.831220 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:50:46 crc kubenswrapper[4482]: E1125 07:50:46.832207 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:50:59 crc kubenswrapper[4482]: I1125 07:50:59.831849 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:50:59 crc kubenswrapper[4482]: E1125 07:50:59.832599 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:51:14 crc kubenswrapper[4482]: I1125 07:51:14.830516 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:51:14 crc kubenswrapper[4482]: E1125 07:51:14.831134 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:51:26 crc kubenswrapper[4482]: I1125 07:51:26.831092 4482 scope.go:117] "RemoveContainer" 
containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:51:26 crc kubenswrapper[4482]: E1125 07:51:26.831607 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:51:38 crc kubenswrapper[4482]: I1125 07:51:38.830944 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:51:38 crc kubenswrapper[4482]: E1125 07:51:38.832282 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:51:50 crc kubenswrapper[4482]: I1125 07:51:50.830443 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:51:51 crc kubenswrapper[4482]: I1125 07:51:51.369082 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"11eb2cb23f6adedeffdaa50c183b54a466ab6684b521a51657d0398e5a86a518"} Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.189052 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b6f7n"] Nov 25 07:52:30 crc kubenswrapper[4482]: E1125 07:52:30.190376 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc210f23-c632-4e92-b4c3-2e5b516a77f4" containerName="registry-server" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.190390 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc210f23-c632-4e92-b4c3-2e5b516a77f4" containerName="registry-server" Nov 25 07:52:30 crc kubenswrapper[4482]: E1125 07:52:30.190406 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc210f23-c632-4e92-b4c3-2e5b516a77f4" containerName="extract-utilities" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.190412 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc210f23-c632-4e92-b4c3-2e5b516a77f4" containerName="extract-utilities" Nov 25 07:52:30 crc kubenswrapper[4482]: E1125 07:52:30.190428 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc210f23-c632-4e92-b4c3-2e5b516a77f4" containerName="extract-content" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.190436 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc210f23-c632-4e92-b4c3-2e5b516a77f4" containerName="extract-content" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.190600 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc210f23-c632-4e92-b4c3-2e5b516a77f4" containerName="registry-server" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.193804 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b6f7n" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.200464 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/634c02bb-bd2a-41e9-be37-b77c80d878cc-catalog-content\") pod \"redhat-operators-b6f7n\" (UID: \"634c02bb-bd2a-41e9-be37-b77c80d878cc\") " pod="openshift-marketplace/redhat-operators-b6f7n" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.200541 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/634c02bb-bd2a-41e9-be37-b77c80d878cc-utilities\") pod \"redhat-operators-b6f7n\" (UID: \"634c02bb-bd2a-41e9-be37-b77c80d878cc\") " pod="openshift-marketplace/redhat-operators-b6f7n" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.200576 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6rnc\" (UniqueName: \"kubernetes.io/projected/634c02bb-bd2a-41e9-be37-b77c80d878cc-kube-api-access-x6rnc\") pod \"redhat-operators-b6f7n\" (UID: \"634c02bb-bd2a-41e9-be37-b77c80d878cc\") " pod="openshift-marketplace/redhat-operators-b6f7n" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.212415 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b6f7n"] Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.302491 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/634c02bb-bd2a-41e9-be37-b77c80d878cc-catalog-content\") pod \"redhat-operators-b6f7n\" (UID: \"634c02bb-bd2a-41e9-be37-b77c80d878cc\") " pod="openshift-marketplace/redhat-operators-b6f7n" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.302564 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/634c02bb-bd2a-41e9-be37-b77c80d878cc-utilities\") pod \"redhat-operators-b6f7n\" (UID: \"634c02bb-bd2a-41e9-be37-b77c80d878cc\") " pod="openshift-marketplace/redhat-operators-b6f7n" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.302611 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6rnc\" (UniqueName: \"kubernetes.io/projected/634c02bb-bd2a-41e9-be37-b77c80d878cc-kube-api-access-x6rnc\") pod \"redhat-operators-b6f7n\" (UID: \"634c02bb-bd2a-41e9-be37-b77c80d878cc\") " pod="openshift-marketplace/redhat-operators-b6f7n" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.304207 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/634c02bb-bd2a-41e9-be37-b77c80d878cc-catalog-content\") pod \"redhat-operators-b6f7n\" (UID: \"634c02bb-bd2a-41e9-be37-b77c80d878cc\") " pod="openshift-marketplace/redhat-operators-b6f7n" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.304678 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/634c02bb-bd2a-41e9-be37-b77c80d878cc-utilities\") pod \"redhat-operators-b6f7n\" (UID: \"634c02bb-bd2a-41e9-be37-b77c80d878cc\") " pod="openshift-marketplace/redhat-operators-b6f7n" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.320405 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x6rnc\" (UniqueName: \"kubernetes.io/projected/634c02bb-bd2a-41e9-be37-b77c80d878cc-kube-api-access-x6rnc\") pod \"redhat-operators-b6f7n\" (UID: \"634c02bb-bd2a-41e9-be37-b77c80d878cc\") " pod="openshift-marketplace/redhat-operators-b6f7n" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.518125 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6f7n" Nov 25 07:52:30 crc kubenswrapper[4482]: I1125 07:52:30.927431 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b6f7n"] Nov 25 07:52:31 crc kubenswrapper[4482]: I1125 07:52:31.666799 4482 generic.go:334] "Generic (PLEG): container finished" podID="634c02bb-bd2a-41e9-be37-b77c80d878cc" containerID="2b19aec3ee8ded7d702e3db7d08787dc8aa703e803e8f135ed018c29a967eb18" exitCode=0 Nov 25 07:52:31 crc kubenswrapper[4482]: I1125 07:52:31.666873 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6f7n" event={"ID":"634c02bb-bd2a-41e9-be37-b77c80d878cc","Type":"ContainerDied","Data":"2b19aec3ee8ded7d702e3db7d08787dc8aa703e803e8f135ed018c29a967eb18"} Nov 25 07:52:31 crc kubenswrapper[4482]: I1125 07:52:31.666918 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6f7n" event={"ID":"634c02bb-bd2a-41e9-be37-b77c80d878cc","Type":"ContainerStarted","Data":"7dcf2c997e078548d4f583ec89bb17b2f5f91d422abc5a7144f87d1863abdd9f"} Nov 25 07:52:31 crc kubenswrapper[4482]: I1125 07:52:31.670213 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 07:52:33 crc kubenswrapper[4482]: I1125 07:52:33.683079 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6f7n" event={"ID":"634c02bb-bd2a-41e9-be37-b77c80d878cc","Type":"ContainerStarted","Data":"7b86ae3a4fb7a289398ca6f9b513669d209da2fc7efa4f0f9668465f5c8f8f17"} Nov 25 07:52:35 crc kubenswrapper[4482]: I1125 07:52:35.698839 4482 generic.go:334] "Generic (PLEG): container finished" podID="634c02bb-bd2a-41e9-be37-b77c80d878cc" containerID="7b86ae3a4fb7a289398ca6f9b513669d209da2fc7efa4f0f9668465f5c8f8f17" exitCode=0 Nov 25 07:52:35 crc kubenswrapper[4482]: I1125 07:52:35.698890 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6f7n" event={"ID":"634c02bb-bd2a-41e9-be37-b77c80d878cc","Type":"ContainerDied","Data":"7b86ae3a4fb7a289398ca6f9b513669d209da2fc7efa4f0f9668465f5c8f8f17"} Nov 25 07:52:36 crc kubenswrapper[4482]: I1125 07:52:36.709856 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6f7n" event={"ID":"634c02bb-bd2a-41e9-be37-b77c80d878cc","Type":"ContainerStarted","Data":"bb91d04cfcbb71809fa87401b64bdd8cf834bfe8d916752a0911fd23d69ef76f"} Nov 25 07:52:36 crc kubenswrapper[4482]: I1125 07:52:36.728306 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b6f7n" podStartSLOduration=2.100470159 podStartE2EDuration="6.727339066s" podCreationTimestamp="2025-11-25 07:52:30 +0000 UTC" firstStartedPulling="2025-11-25 07:52:31.669936216 +0000 UTC m=+3926.158167476" lastFinishedPulling="2025-11-25 07:52:36.296805125 +0000 UTC m=+3930.785036383" observedRunningTime="2025-11-25 07:52:36.722738946 +0000 UTC m=+3931.210970204" watchObservedRunningTime="2025-11-25 07:52:36.727339066 +0000 UTC m=+3931.215570325" Nov 25 07:52:40 crc 
Nov 25 07:52:40 crc kubenswrapper[4482]: I1125 07:52:40.519259 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b6f7n"
Nov 25 07:52:40 crc kubenswrapper[4482]: I1125 07:52:40.519613 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b6f7n"
Nov 25 07:52:41 crc kubenswrapper[4482]: I1125 07:52:41.743293 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b6f7n" podUID="634c02bb-bd2a-41e9-be37-b77c80d878cc" containerName="registry-server" probeResult="failure" output=<
Nov 25 07:52:41 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s
Nov 25 07:52:41 crc kubenswrapper[4482]: >
Nov 25 07:52:50 crc kubenswrapper[4482]: I1125 07:52:50.561135 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b6f7n"
Nov 25 07:52:50 crc kubenswrapper[4482]: I1125 07:52:50.607608 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b6f7n"
Nov 25 07:52:50 crc kubenswrapper[4482]: I1125 07:52:50.805817 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b6f7n"]
Nov 25 07:52:51 crc kubenswrapper[4482]: I1125 07:52:51.839082 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b6f7n" podUID="634c02bb-bd2a-41e9-be37-b77c80d878cc" containerName="registry-server" containerID="cri-o://bb91d04cfcbb71809fa87401b64bdd8cf834bfe8d916752a0911fd23d69ef76f" gracePeriod=2
Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.570930 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6f7n"
Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.640272 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/634c02bb-bd2a-41e9-be37-b77c80d878cc-utilities\") pod \"634c02bb-bd2a-41e9-be37-b77c80d878cc\" (UID: \"634c02bb-bd2a-41e9-be37-b77c80d878cc\") "
Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.640835 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/634c02bb-bd2a-41e9-be37-b77c80d878cc-catalog-content\") pod \"634c02bb-bd2a-41e9-be37-b77c80d878cc\" (UID: \"634c02bb-bd2a-41e9-be37-b77c80d878cc\") "
Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.640956 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6rnc\" (UniqueName: \"kubernetes.io/projected/634c02bb-bd2a-41e9-be37-b77c80d878cc-kube-api-access-x6rnc\") pod \"634c02bb-bd2a-41e9-be37-b77c80d878cc\" (UID: \"634c02bb-bd2a-41e9-be37-b77c80d878cc\") "
Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.640827 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/634c02bb-bd2a-41e9-be37-b77c80d878cc-utilities" (OuterVolumeSpecName: "utilities") pod "634c02bb-bd2a-41e9-be37-b77c80d878cc" (UID: "634c02bb-bd2a-41e9-be37-b77c80d878cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.641515 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/634c02bb-bd2a-41e9-be37-b77c80d878cc-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.647571 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/634c02bb-bd2a-41e9-be37-b77c80d878cc-kube-api-access-x6rnc" (OuterVolumeSpecName: "kube-api-access-x6rnc") pod "634c02bb-bd2a-41e9-be37-b77c80d878cc" (UID: "634c02bb-bd2a-41e9-be37-b77c80d878cc"). InnerVolumeSpecName "kube-api-access-x6rnc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.710634 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/634c02bb-bd2a-41e9-be37-b77c80d878cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "634c02bb-bd2a-41e9-be37-b77c80d878cc" (UID: "634c02bb-bd2a-41e9-be37-b77c80d878cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.742939 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/634c02bb-bd2a-41e9-be37-b77c80d878cc-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.742967 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6rnc\" (UniqueName: \"kubernetes.io/projected/634c02bb-bd2a-41e9-be37-b77c80d878cc-kube-api-access-x6rnc\") on node \"crc\" DevicePath \"\""
Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.847784 4482 generic.go:334] "Generic (PLEG): container finished" podID="634c02bb-bd2a-41e9-be37-b77c80d878cc" containerID="bb91d04cfcbb71809fa87401b64bdd8cf834bfe8d916752a0911fd23d69ef76f" exitCode=0
Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.847820 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6f7n" event={"ID":"634c02bb-bd2a-41e9-be37-b77c80d878cc","Type":"ContainerDied","Data":"bb91d04cfcbb71809fa87401b64bdd8cf834bfe8d916752a0911fd23d69ef76f"}
Need to start a new one" pod="openshift-marketplace/redhat-operators-b6f7n" Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.847845 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6f7n" event={"ID":"634c02bb-bd2a-41e9-be37-b77c80d878cc","Type":"ContainerDied","Data":"7dcf2c997e078548d4f583ec89bb17b2f5f91d422abc5a7144f87d1863abdd9f"} Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.847865 4482 scope.go:117] "RemoveContainer" containerID="bb91d04cfcbb71809fa87401b64bdd8cf834bfe8d916752a0911fd23d69ef76f" Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.870756 4482 scope.go:117] "RemoveContainer" containerID="7b86ae3a4fb7a289398ca6f9b513669d209da2fc7efa4f0f9668465f5c8f8f17" Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.872403 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b6f7n"] Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.878773 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b6f7n"] Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.912124 4482 scope.go:117] "RemoveContainer" containerID="2b19aec3ee8ded7d702e3db7d08787dc8aa703e803e8f135ed018c29a967eb18" Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.926515 4482 scope.go:117] "RemoveContainer" containerID="bb91d04cfcbb71809fa87401b64bdd8cf834bfe8d916752a0911fd23d69ef76f" Nov 25 07:52:52 crc kubenswrapper[4482]: E1125 07:52:52.926852 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb91d04cfcbb71809fa87401b64bdd8cf834bfe8d916752a0911fd23d69ef76f\": container with ID starting with bb91d04cfcbb71809fa87401b64bdd8cf834bfe8d916752a0911fd23d69ef76f not found: ID does not exist" containerID="bb91d04cfcbb71809fa87401b64bdd8cf834bfe8d916752a0911fd23d69ef76f" Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.927291 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb91d04cfcbb71809fa87401b64bdd8cf834bfe8d916752a0911fd23d69ef76f"} err="failed to get container status \"bb91d04cfcbb71809fa87401b64bdd8cf834bfe8d916752a0911fd23d69ef76f\": rpc error: code = NotFound desc = could not find container \"bb91d04cfcbb71809fa87401b64bdd8cf834bfe8d916752a0911fd23d69ef76f\": container with ID starting with bb91d04cfcbb71809fa87401b64bdd8cf834bfe8d916752a0911fd23d69ef76f not found: ID does not exist" Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.927326 4482 scope.go:117] "RemoveContainer" containerID="7b86ae3a4fb7a289398ca6f9b513669d209da2fc7efa4f0f9668465f5c8f8f17" Nov 25 07:52:52 crc kubenswrapper[4482]: E1125 07:52:52.927676 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b86ae3a4fb7a289398ca6f9b513669d209da2fc7efa4f0f9668465f5c8f8f17\": container with ID starting with 7b86ae3a4fb7a289398ca6f9b513669d209da2fc7efa4f0f9668465f5c8f8f17 not found: ID does not exist" containerID="7b86ae3a4fb7a289398ca6f9b513669d209da2fc7efa4f0f9668465f5c8f8f17" Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.927761 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b86ae3a4fb7a289398ca6f9b513669d209da2fc7efa4f0f9668465f5c8f8f17"} err="failed to get container status \"7b86ae3a4fb7a289398ca6f9b513669d209da2fc7efa4f0f9668465f5c8f8f17\": rpc error: code = NotFound desc = could not find container 
\"7b86ae3a4fb7a289398ca6f9b513669d209da2fc7efa4f0f9668465f5c8f8f17\": container with ID starting with 7b86ae3a4fb7a289398ca6f9b513669d209da2fc7efa4f0f9668465f5c8f8f17 not found: ID does not exist" Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.927835 4482 scope.go:117] "RemoveContainer" containerID="2b19aec3ee8ded7d702e3db7d08787dc8aa703e803e8f135ed018c29a967eb18" Nov 25 07:52:52 crc kubenswrapper[4482]: E1125 07:52:52.928148 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b19aec3ee8ded7d702e3db7d08787dc8aa703e803e8f135ed018c29a967eb18\": container with ID starting with 2b19aec3ee8ded7d702e3db7d08787dc8aa703e803e8f135ed018c29a967eb18 not found: ID does not exist" containerID="2b19aec3ee8ded7d702e3db7d08787dc8aa703e803e8f135ed018c29a967eb18" Nov 25 07:52:52 crc kubenswrapper[4482]: I1125 07:52:52.928227 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b19aec3ee8ded7d702e3db7d08787dc8aa703e803e8f135ed018c29a967eb18"} err="failed to get container status \"2b19aec3ee8ded7d702e3db7d08787dc8aa703e803e8f135ed018c29a967eb18\": rpc error: code = NotFound desc = could not find container \"2b19aec3ee8ded7d702e3db7d08787dc8aa703e803e8f135ed018c29a967eb18\": container with ID starting with 2b19aec3ee8ded7d702e3db7d08787dc8aa703e803e8f135ed018c29a967eb18 not found: ID does not exist" Nov 25 07:52:53 crc kubenswrapper[4482]: I1125 07:52:53.838890 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="634c02bb-bd2a-41e9-be37-b77c80d878cc" path="/var/lib/kubelet/pods/634c02bb-bd2a-41e9-be37-b77c80d878cc/volumes" Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.340254 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xcrn8"] Nov 25 07:53:56 crc kubenswrapper[4482]: E1125 07:53:56.340963 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="634c02bb-bd2a-41e9-be37-b77c80d878cc" containerName="extract-content" Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.340976 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="634c02bb-bd2a-41e9-be37-b77c80d878cc" containerName="extract-content" Nov 25 07:53:56 crc kubenswrapper[4482]: E1125 07:53:56.340997 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="634c02bb-bd2a-41e9-be37-b77c80d878cc" containerName="extract-utilities" Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.341004 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="634c02bb-bd2a-41e9-be37-b77c80d878cc" containerName="extract-utilities" Nov 25 07:53:56 crc kubenswrapper[4482]: E1125 07:53:56.341020 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="634c02bb-bd2a-41e9-be37-b77c80d878cc" containerName="registry-server" Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.341027 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="634c02bb-bd2a-41e9-be37-b77c80d878cc" containerName="registry-server" Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.341409 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="634c02bb-bd2a-41e9-be37-b77c80d878cc" containerName="registry-server" Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.342660 4482 util.go:30] "No sandbox for pod can be found. 
Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.342660 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.356949 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-catalog-content\") pod \"redhat-marketplace-xcrn8\" (UID: \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\") " pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.357222 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz27c\" (UniqueName: \"kubernetes.io/projected/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-kube-api-access-sz27c\") pod \"redhat-marketplace-xcrn8\" (UID: \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\") " pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.357326 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-utilities\") pod \"redhat-marketplace-xcrn8\" (UID: \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\") " pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.367209 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xcrn8"]
Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.458275 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz27c\" (UniqueName: \"kubernetes.io/projected/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-kube-api-access-sz27c\") pod \"redhat-marketplace-xcrn8\" (UID: \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\") " pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.458432 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-utilities\") pod \"redhat-marketplace-xcrn8\" (UID: \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\") " pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.458521 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-catalog-content\") pod \"redhat-marketplace-xcrn8\" (UID: \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\") " pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.458956 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-catalog-content\") pod \"redhat-marketplace-xcrn8\" (UID: \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\") " pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.459193 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-utilities\") pod \"redhat-marketplace-xcrn8\" (UID: \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\") " pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.497140 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz27c\" (UniqueName: \"kubernetes.io/projected/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-kube-api-access-sz27c\") pod \"redhat-marketplace-xcrn8\" (UID: \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\") " pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:53:56 crc kubenswrapper[4482]: I1125 07:53:56.660932 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:53:57 crc kubenswrapper[4482]: I1125 07:53:57.106463 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xcrn8"]
Nov 25 07:53:57 crc kubenswrapper[4482]: W1125 07:53:57.112269 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8f9ce8c_24d3_4fe5_8538_ead560bdd838.slice/crio-362c4fb53074835f3b09abc648ccc84b5cacf9a9b6f51abfbdccd766d893e657 WatchSource:0}: Error finding container 362c4fb53074835f3b09abc648ccc84b5cacf9a9b6f51abfbdccd766d893e657: Status 404 returned error can't find the container with id 362c4fb53074835f3b09abc648ccc84b5cacf9a9b6f51abfbdccd766d893e657
Nov 25 07:53:57 crc kubenswrapper[4482]: I1125 07:53:57.344933 4482 generic.go:334] "Generic (PLEG): container finished" podID="c8f9ce8c-24d3-4fe5-8538-ead560bdd838" containerID="d352cfebe8169146f907c5b591d4355cca90d3e1514b95c4a8e2c1a948865145" exitCode=0
Nov 25 07:53:57 crc kubenswrapper[4482]: I1125 07:53:57.345320 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xcrn8" event={"ID":"c8f9ce8c-24d3-4fe5-8538-ead560bdd838","Type":"ContainerDied","Data":"d352cfebe8169146f907c5b591d4355cca90d3e1514b95c4a8e2c1a948865145"}
Nov 25 07:53:57 crc kubenswrapper[4482]: I1125 07:53:57.345351 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xcrn8" event={"ID":"c8f9ce8c-24d3-4fe5-8538-ead560bdd838","Type":"ContainerStarted","Data":"362c4fb53074835f3b09abc648ccc84b5cacf9a9b6f51abfbdccd766d893e657"}
Nov 25 07:53:58 crc kubenswrapper[4482]: I1125 07:53:58.354659 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xcrn8" event={"ID":"c8f9ce8c-24d3-4fe5-8538-ead560bdd838","Type":"ContainerStarted","Data":"a70afb912df5618c1ad970341e56ec7e7e6b169eba8f3fde8e591a16ce63d038"}
Nov 25 07:53:59 crc kubenswrapper[4482]: I1125 07:53:59.362733 4482 generic.go:334] "Generic (PLEG): container finished" podID="c8f9ce8c-24d3-4fe5-8538-ead560bdd838" containerID="a70afb912df5618c1ad970341e56ec7e7e6b169eba8f3fde8e591a16ce63d038" exitCode=0
Nov 25 07:53:59 crc kubenswrapper[4482]: I1125 07:53:59.362819 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xcrn8" event={"ID":"c8f9ce8c-24d3-4fe5-8538-ead560bdd838","Type":"ContainerDied","Data":"a70afb912df5618c1ad970341e56ec7e7e6b169eba8f3fde8e591a16ce63d038"}
Nov 25 07:54:00 crc kubenswrapper[4482]: I1125 07:54:00.372930 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xcrn8" event={"ID":"c8f9ce8c-24d3-4fe5-8538-ead560bdd838","Type":"ContainerStarted","Data":"5130ee1ef012da203472f56330aaa0d7f5e46ad255b8945138121bb831eb705f"}
Nov 25 07:54:00 crc kubenswrapper[4482]: I1125 07:54:00.388525 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xcrn8" podStartSLOduration=1.8964547330000001 podStartE2EDuration="4.388509459s" podCreationTimestamp="2025-11-25 07:53:56 +0000 UTC" firstStartedPulling="2025-11-25 07:53:57.347541549 +0000 UTC m=+4011.835772808" lastFinishedPulling="2025-11-25 07:53:59.839596276 +0000 UTC m=+4014.327827534" observedRunningTime="2025-11-25 07:54:00.387097488 +0000 UTC m=+4014.875328747" watchObservedRunningTime="2025-11-25 07:54:00.388509459 +0000 UTC m=+4014.876740718"
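podStartSLOduration in these latency-tracker entries is not the raw end-to-end time; image-pull time is subtracted out, since pull duration depends on the registry and network rather than on kubelet behavior. The entry just above makes the arithmetic checkable from the monotonic m=+ offsets: pulling ran from m=+4011.835772808 to m=+4014.327827534 (about 2.492s), and 4.388509459s minus that window gives the logged 1.896454733s. The same check in Go:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (m=+...) from the redhat-marketplace-xcrn8
        // "Observed pod startup duration" entry above.
        const (
            firstStartedPulling = 4011.835772808
            lastFinishedPulling = 4014.327827534
            e2e                 = 4.388509459 // podStartE2EDuration in seconds
        )
        pull := lastFinishedPulling - firstStartedPulling
        // Prints pull=2.492054726s slo=1.896454733s (up to float rounding):
        // the SLO metric is E2E startup minus time spent pulling images.
        fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, e2e-pull)
    }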
Nov 25 07:54:06 crc kubenswrapper[4482]: I1125 07:54:06.661481 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:54:06 crc kubenswrapper[4482]: I1125 07:54:06.662066 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:54:06 crc kubenswrapper[4482]: I1125 07:54:06.702887 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:54:07 crc kubenswrapper[4482]: I1125 07:54:07.465743 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:54:07 crc kubenswrapper[4482]: I1125 07:54:07.501492 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xcrn8"]
Nov 25 07:54:09 crc kubenswrapper[4482]: I1125 07:54:09.117560 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 07:54:09 crc kubenswrapper[4482]: I1125 07:54:09.117623 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 07:54:09 crc kubenswrapper[4482]: I1125 07:54:09.439303 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xcrn8" podUID="c8f9ce8c-24d3-4fe5-8538-ead560bdd838" containerName="registry-server" containerID="cri-o://5130ee1ef012da203472f56330aaa0d7f5e46ad255b8945138121bb831eb705f" gracePeriod=2
Nov 25 07:54:09 crc kubenswrapper[4482]: I1125 07:54:09.845251 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.008971 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-catalog-content\") pod \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\" (UID: \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\") "
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.009222 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz27c\" (UniqueName: \"kubernetes.io/projected/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-kube-api-access-sz27c\") pod \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\" (UID: \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\") "
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.009261 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-utilities\") pod \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\" (UID: \"c8f9ce8c-24d3-4fe5-8538-ead560bdd838\") "
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.009933 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-utilities" (OuterVolumeSpecName: "utilities") pod "c8f9ce8c-24d3-4fe5-8538-ead560bdd838" (UID: "c8f9ce8c-24d3-4fe5-8538-ead560bdd838"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.014515 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-kube-api-access-sz27c" (OuterVolumeSpecName: "kube-api-access-sz27c") pod "c8f9ce8c-24d3-4fe5-8538-ead560bdd838" (UID: "c8f9ce8c-24d3-4fe5-8538-ead560bdd838"). InnerVolumeSpecName "kube-api-access-sz27c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.023074 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8f9ce8c-24d3-4fe5-8538-ead560bdd838" (UID: "c8f9ce8c-24d3-4fe5-8538-ead560bdd838"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.111373 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz27c\" (UniqueName: \"kubernetes.io/projected/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-kube-api-access-sz27c\") on node \"crc\" DevicePath \"\""
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.111401 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.111413 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f9ce8c-24d3-4fe5-8538-ead560bdd838-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.456950 4482 generic.go:334] "Generic (PLEG): container finished" podID="c8f9ce8c-24d3-4fe5-8538-ead560bdd838" containerID="5130ee1ef012da203472f56330aaa0d7f5e46ad255b8945138121bb831eb705f" exitCode=0
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.457026 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xcrn8" event={"ID":"c8f9ce8c-24d3-4fe5-8538-ead560bdd838","Type":"ContainerDied","Data":"5130ee1ef012da203472f56330aaa0d7f5e46ad255b8945138121bb831eb705f"}
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.457088 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xcrn8" event={"ID":"c8f9ce8c-24d3-4fe5-8538-ead560bdd838","Type":"ContainerDied","Data":"362c4fb53074835f3b09abc648ccc84b5cacf9a9b6f51abfbdccd766d893e657"}
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.457111 4482 scope.go:117] "RemoveContainer" containerID="5130ee1ef012da203472f56330aaa0d7f5e46ad255b8945138121bb831eb705f"
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.457351 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xcrn8"
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.500107 4482 scope.go:117] "RemoveContainer" containerID="a70afb912df5618c1ad970341e56ec7e7e6b169eba8f3fde8e591a16ce63d038"
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.500254 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xcrn8"]
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.508365 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xcrn8"]
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.915720 4482 scope.go:117] "RemoveContainer" containerID="d352cfebe8169146f907c5b591d4355cca90d3e1514b95c4a8e2c1a948865145"
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.950699 4482 scope.go:117] "RemoveContainer" containerID="5130ee1ef012da203472f56330aaa0d7f5e46ad255b8945138121bb831eb705f"
Nov 25 07:54:10 crc kubenswrapper[4482]: E1125 07:54:10.951258 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5130ee1ef012da203472f56330aaa0d7f5e46ad255b8945138121bb831eb705f\": container with ID starting with 5130ee1ef012da203472f56330aaa0d7f5e46ad255b8945138121bb831eb705f not found: ID does not exist" containerID="5130ee1ef012da203472f56330aaa0d7f5e46ad255b8945138121bb831eb705f"
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.951312 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5130ee1ef012da203472f56330aaa0d7f5e46ad255b8945138121bb831eb705f"} err="failed to get container status \"5130ee1ef012da203472f56330aaa0d7f5e46ad255b8945138121bb831eb705f\": rpc error: code = NotFound desc = could not find container \"5130ee1ef012da203472f56330aaa0d7f5e46ad255b8945138121bb831eb705f\": container with ID starting with 5130ee1ef012da203472f56330aaa0d7f5e46ad255b8945138121bb831eb705f not found: ID does not exist"
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.951350 4482 scope.go:117] "RemoveContainer" containerID="a70afb912df5618c1ad970341e56ec7e7e6b169eba8f3fde8e591a16ce63d038"
Nov 25 07:54:10 crc kubenswrapper[4482]: E1125 07:54:10.951643 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a70afb912df5618c1ad970341e56ec7e7e6b169eba8f3fde8e591a16ce63d038\": container with ID starting with a70afb912df5618c1ad970341e56ec7e7e6b169eba8f3fde8e591a16ce63d038 not found: ID does not exist" containerID="a70afb912df5618c1ad970341e56ec7e7e6b169eba8f3fde8e591a16ce63d038"
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.951680 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a70afb912df5618c1ad970341e56ec7e7e6b169eba8f3fde8e591a16ce63d038"} err="failed to get container status \"a70afb912df5618c1ad970341e56ec7e7e6b169eba8f3fde8e591a16ce63d038\": rpc error: code = NotFound desc = could not find container \"a70afb912df5618c1ad970341e56ec7e7e6b169eba8f3fde8e591a16ce63d038\": container with ID starting with a70afb912df5618c1ad970341e56ec7e7e6b169eba8f3fde8e591a16ce63d038 not found: ID does not exist"
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.951704 4482 scope.go:117] "RemoveContainer" containerID="d352cfebe8169146f907c5b591d4355cca90d3e1514b95c4a8e2c1a948865145"
Nov 25 07:54:10 crc kubenswrapper[4482]: E1125 07:54:10.951908 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d352cfebe8169146f907c5b591d4355cca90d3e1514b95c4a8e2c1a948865145\": container with ID starting with d352cfebe8169146f907c5b591d4355cca90d3e1514b95c4a8e2c1a948865145 not found: ID does not exist" containerID="d352cfebe8169146f907c5b591d4355cca90d3e1514b95c4a8e2c1a948865145"
Nov 25 07:54:10 crc kubenswrapper[4482]: I1125 07:54:10.951932 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d352cfebe8169146f907c5b591d4355cca90d3e1514b95c4a8e2c1a948865145"} err="failed to get container status \"d352cfebe8169146f907c5b591d4355cca90d3e1514b95c4a8e2c1a948865145\": rpc error: code = NotFound desc = could not find container \"d352cfebe8169146f907c5b591d4355cca90d3e1514b95c4a8e2c1a948865145\": container with ID starting with d352cfebe8169146f907c5b591d4355cca90d3e1514b95c4a8e2c1a948865145 not found: ID does not exist"
Nov 25 07:54:11 crc kubenswrapper[4482]: I1125 07:54:11.840870 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f9ce8c-24d3-4fe5-8538-ead560bdd838" path="/var/lib/kubelet/pods/c8f9ce8c-24d3-4fe5-8538-ead560bdd838/volumes"
Nov 25 07:54:39 crc kubenswrapper[4482]: I1125 07:54:39.117490 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 07:54:39 crc kubenswrapper[4482]: I1125 07:54:39.117837 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 07:55:09 crc kubenswrapper[4482]: I1125 07:55:09.118299 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 07:55:09 crc kubenswrapper[4482]: I1125 07:55:09.118695 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 07:55:09 crc kubenswrapper[4482]: I1125 07:55:09.118735 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz"
Nov 25 07:55:09 crc kubenswrapper[4482]: I1125 07:55:09.119454 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11eb2cb23f6adedeffdaa50c183b54a466ab6684b521a51657d0398e5a86a518"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 07:55:09 crc kubenswrapper[4482]: I1125 07:55:09.119511 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://11eb2cb23f6adedeffdaa50c183b54a466ab6684b521a51657d0398e5a86a518" gracePeriod=600
podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://11eb2cb23f6adedeffdaa50c183b54a466ab6684b521a51657d0398e5a86a518" gracePeriod=600 Nov 25 07:55:09 crc kubenswrapper[4482]: I1125 07:55:09.861616 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="11eb2cb23f6adedeffdaa50c183b54a466ab6684b521a51657d0398e5a86a518" exitCode=0 Nov 25 07:55:09 crc kubenswrapper[4482]: I1125 07:55:09.861691 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"11eb2cb23f6adedeffdaa50c183b54a466ab6684b521a51657d0398e5a86a518"} Nov 25 07:55:09 crc kubenswrapper[4482]: I1125 07:55:09.862210 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78"} Nov 25 07:55:09 crc kubenswrapper[4482]: I1125 07:55:09.862242 4482 scope.go:117] "RemoveContainer" containerID="8495a16e508b2a175edefa2bd0ee15cabcedf0aac28695239120e469fa05c87b" Nov 25 07:57:09 crc kubenswrapper[4482]: I1125 07:57:09.117826 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:57:09 crc kubenswrapper[4482]: I1125 07:57:09.118270 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:57:21 crc kubenswrapper[4482]: I1125 07:57:21.991650 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vcsxk"] Nov 25 07:57:21 crc kubenswrapper[4482]: E1125 07:57:21.992335 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f9ce8c-24d3-4fe5-8538-ead560bdd838" containerName="registry-server" Nov 25 07:57:21 crc kubenswrapper[4482]: I1125 07:57:21.992349 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f9ce8c-24d3-4fe5-8538-ead560bdd838" containerName="registry-server" Nov 25 07:57:21 crc kubenswrapper[4482]: E1125 07:57:21.992359 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f9ce8c-24d3-4fe5-8538-ead560bdd838" containerName="extract-utilities" Nov 25 07:57:21 crc kubenswrapper[4482]: I1125 07:57:21.992364 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f9ce8c-24d3-4fe5-8538-ead560bdd838" containerName="extract-utilities" Nov 25 07:57:21 crc kubenswrapper[4482]: E1125 07:57:21.992389 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f9ce8c-24d3-4fe5-8538-ead560bdd838" containerName="extract-content" Nov 25 07:57:21 crc kubenswrapper[4482]: I1125 07:57:21.992395 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f9ce8c-24d3-4fe5-8538-ead560bdd838" containerName="extract-content" Nov 25 07:57:21 crc kubenswrapper[4482]: I1125 07:57:21.992545 4482 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c8f9ce8c-24d3-4fe5-8538-ead560bdd838" containerName="registry-server" Nov 25 07:57:21 crc kubenswrapper[4482]: I1125 07:57:21.993656 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:22 crc kubenswrapper[4482]: I1125 07:57:22.010233 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcsxk"] Nov 25 07:57:22 crc kubenswrapper[4482]: I1125 07:57:22.157764 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk8qk\" (UniqueName: \"kubernetes.io/projected/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-kube-api-access-wk8qk\") pod \"certified-operators-vcsxk\" (UID: \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\") " pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:22 crc kubenswrapper[4482]: I1125 07:57:22.157825 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-catalog-content\") pod \"certified-operators-vcsxk\" (UID: \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\") " pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:22 crc kubenswrapper[4482]: I1125 07:57:22.158017 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-utilities\") pod \"certified-operators-vcsxk\" (UID: \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\") " pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:22 crc kubenswrapper[4482]: I1125 07:57:22.259655 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk8qk\" (UniqueName: \"kubernetes.io/projected/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-kube-api-access-wk8qk\") pod \"certified-operators-vcsxk\" (UID: \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\") " pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:22 crc kubenswrapper[4482]: I1125 07:57:22.259713 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-catalog-content\") pod \"certified-operators-vcsxk\" (UID: \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\") " pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:22 crc kubenswrapper[4482]: I1125 07:57:22.259766 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-utilities\") pod \"certified-operators-vcsxk\" (UID: \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\") " pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:22 crc kubenswrapper[4482]: I1125 07:57:22.260148 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-catalog-content\") pod \"certified-operators-vcsxk\" (UID: \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\") " pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:22 crc kubenswrapper[4482]: I1125 07:57:22.260238 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-utilities\") pod \"certified-operators-vcsxk\" (UID: 
\"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\") " pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:22 crc kubenswrapper[4482]: I1125 07:57:22.275360 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk8qk\" (UniqueName: \"kubernetes.io/projected/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-kube-api-access-wk8qk\") pod \"certified-operators-vcsxk\" (UID: \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\") " pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:22 crc kubenswrapper[4482]: I1125 07:57:22.307325 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:22 crc kubenswrapper[4482]: I1125 07:57:22.700555 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcsxk"] Nov 25 07:57:22 crc kubenswrapper[4482]: I1125 07:57:22.869884 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcsxk" event={"ID":"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b","Type":"ContainerStarted","Data":"d95941c299e6b0ed356457ad93dc7e91c1d0b66b7b925e48aa732e92892262de"} Nov 25 07:57:23 crc kubenswrapper[4482]: I1125 07:57:23.877048 4482 generic.go:334] "Generic (PLEG): container finished" podID="5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" containerID="d888cc6e4f3e5a103b1ee5d4519a1f7e953669078b60f475123405d5a4a2c7fd" exitCode=0 Nov 25 07:57:23 crc kubenswrapper[4482]: I1125 07:57:23.877101 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcsxk" event={"ID":"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b","Type":"ContainerDied","Data":"d888cc6e4f3e5a103b1ee5d4519a1f7e953669078b60f475123405d5a4a2c7fd"} Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.385716 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9v95x"] Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.387474 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.396148 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9v95x"] Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.494882 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5q8m\" (UniqueName: \"kubernetes.io/projected/5641aee1-8992-446f-b4c9-6756b34867af-kube-api-access-h5q8m\") pod \"community-operators-9v95x\" (UID: \"5641aee1-8992-446f-b4c9-6756b34867af\") " pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.495038 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5641aee1-8992-446f-b4c9-6756b34867af-catalog-content\") pod \"community-operators-9v95x\" (UID: \"5641aee1-8992-446f-b4c9-6756b34867af\") " pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.495126 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5641aee1-8992-446f-b4c9-6756b34867af-utilities\") pod \"community-operators-9v95x\" (UID: \"5641aee1-8992-446f-b4c9-6756b34867af\") " pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.597388 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5q8m\" (UniqueName: \"kubernetes.io/projected/5641aee1-8992-446f-b4c9-6756b34867af-kube-api-access-h5q8m\") pod \"community-operators-9v95x\" (UID: \"5641aee1-8992-446f-b4c9-6756b34867af\") " pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.597443 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5641aee1-8992-446f-b4c9-6756b34867af-catalog-content\") pod \"community-operators-9v95x\" (UID: \"5641aee1-8992-446f-b4c9-6756b34867af\") " pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.597488 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5641aee1-8992-446f-b4c9-6756b34867af-utilities\") pod \"community-operators-9v95x\" (UID: \"5641aee1-8992-446f-b4c9-6756b34867af\") " pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.597862 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5641aee1-8992-446f-b4c9-6756b34867af-catalog-content\") pod \"community-operators-9v95x\" (UID: \"5641aee1-8992-446f-b4c9-6756b34867af\") " pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.598479 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5641aee1-8992-446f-b4c9-6756b34867af-utilities\") pod \"community-operators-9v95x\" (UID: \"5641aee1-8992-446f-b4c9-6756b34867af\") " pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.618946 4482 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h5q8m\" (UniqueName: \"kubernetes.io/projected/5641aee1-8992-446f-b4c9-6756b34867af-kube-api-access-h5q8m\") pod \"community-operators-9v95x\" (UID: \"5641aee1-8992-446f-b4c9-6756b34867af\") " pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.702570 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:24 crc kubenswrapper[4482]: I1125 07:57:24.905694 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcsxk" event={"ID":"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b","Type":"ContainerStarted","Data":"241932fbeee3287f562fc838fc69a0bfac1c4940492d5a1e065a27f81ff337fe"} Nov 25 07:57:25 crc kubenswrapper[4482]: I1125 07:57:25.184998 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9v95x"] Nov 25 07:57:25 crc kubenswrapper[4482]: I1125 07:57:25.912979 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9v95x" event={"ID":"5641aee1-8992-446f-b4c9-6756b34867af","Type":"ContainerStarted","Data":"8081f85f02007d7b17bae2eee5b5a789f7215b62c3a5c8b47cde72f2c7a8eed3"} Nov 25 07:57:25 crc kubenswrapper[4482]: I1125 07:57:25.913874 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9v95x" event={"ID":"5641aee1-8992-446f-b4c9-6756b34867af","Type":"ContainerStarted","Data":"30d3c748c08f204443429b52cb3edac179c0f699837df032aa9b8e774a6060df"} Nov 25 07:57:25 crc kubenswrapper[4482]: I1125 07:57:25.915751 4482 generic.go:334] "Generic (PLEG): container finished" podID="5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" containerID="241932fbeee3287f562fc838fc69a0bfac1c4940492d5a1e065a27f81ff337fe" exitCode=0 Nov 25 07:57:25 crc kubenswrapper[4482]: I1125 07:57:25.915786 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcsxk" event={"ID":"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b","Type":"ContainerDied","Data":"241932fbeee3287f562fc838fc69a0bfac1c4940492d5a1e065a27f81ff337fe"} Nov 25 07:57:26 crc kubenswrapper[4482]: I1125 07:57:26.923229 4482 generic.go:334] "Generic (PLEG): container finished" podID="5641aee1-8992-446f-b4c9-6756b34867af" containerID="8081f85f02007d7b17bae2eee5b5a789f7215b62c3a5c8b47cde72f2c7a8eed3" exitCode=0 Nov 25 07:57:26 crc kubenswrapper[4482]: I1125 07:57:26.923298 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9v95x" event={"ID":"5641aee1-8992-446f-b4c9-6756b34867af","Type":"ContainerDied","Data":"8081f85f02007d7b17bae2eee5b5a789f7215b62c3a5c8b47cde72f2c7a8eed3"} Nov 25 07:57:26 crc kubenswrapper[4482]: I1125 07:57:26.925551 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcsxk" event={"ID":"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b","Type":"ContainerStarted","Data":"c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2"} Nov 25 07:57:26 crc kubenswrapper[4482]: I1125 07:57:26.958822 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vcsxk" podStartSLOduration=3.454528368 podStartE2EDuration="5.958810018s" podCreationTimestamp="2025-11-25 07:57:21 +0000 UTC" firstStartedPulling="2025-11-25 07:57:23.879117852 +0000 UTC m=+4218.367349112" lastFinishedPulling="2025-11-25 
07:57:26.383399503 +0000 UTC m=+4220.871630762" observedRunningTime="2025-11-25 07:57:26.954436856 +0000 UTC m=+4221.442668115" watchObservedRunningTime="2025-11-25 07:57:26.958810018 +0000 UTC m=+4221.447041277" Nov 25 07:57:27 crc kubenswrapper[4482]: I1125 07:57:27.935647 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9v95x" event={"ID":"5641aee1-8992-446f-b4c9-6756b34867af","Type":"ContainerStarted","Data":"7e4b532ed758650a180720c5d3c85267e65cf6979ede2c7a4a6311dc2c1bf2cb"} Nov 25 07:57:28 crc kubenswrapper[4482]: I1125 07:57:28.943613 4482 generic.go:334] "Generic (PLEG): container finished" podID="5641aee1-8992-446f-b4c9-6756b34867af" containerID="7e4b532ed758650a180720c5d3c85267e65cf6979ede2c7a4a6311dc2c1bf2cb" exitCode=0 Nov 25 07:57:28 crc kubenswrapper[4482]: I1125 07:57:28.943671 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9v95x" event={"ID":"5641aee1-8992-446f-b4c9-6756b34867af","Type":"ContainerDied","Data":"7e4b532ed758650a180720c5d3c85267e65cf6979ede2c7a4a6311dc2c1bf2cb"} Nov 25 07:57:29 crc kubenswrapper[4482]: I1125 07:57:29.953000 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9v95x" event={"ID":"5641aee1-8992-446f-b4c9-6756b34867af","Type":"ContainerStarted","Data":"88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812"} Nov 25 07:57:32 crc kubenswrapper[4482]: I1125 07:57:32.307687 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:32 crc kubenswrapper[4482]: I1125 07:57:32.308292 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:32 crc kubenswrapper[4482]: I1125 07:57:32.347483 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:32 crc kubenswrapper[4482]: I1125 07:57:32.363384 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9v95x" podStartSLOduration=5.872034487 podStartE2EDuration="8.363368655s" podCreationTimestamp="2025-11-25 07:57:24 +0000 UTC" firstStartedPulling="2025-11-25 07:57:26.925062794 +0000 UTC m=+4221.413294053" lastFinishedPulling="2025-11-25 07:57:29.416396973 +0000 UTC m=+4223.904628221" observedRunningTime="2025-11-25 07:57:29.998578498 +0000 UTC m=+4224.486809757" watchObservedRunningTime="2025-11-25 07:57:32.363368655 +0000 UTC m=+4226.851599914" Nov 25 07:57:33 crc kubenswrapper[4482]: I1125 07:57:33.017003 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:33 crc kubenswrapper[4482]: I1125 07:57:33.986082 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcsxk"] Nov 25 07:57:34 crc kubenswrapper[4482]: I1125 07:57:34.703556 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:34 crc kubenswrapper[4482]: I1125 07:57:34.703632 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:34 crc kubenswrapper[4482]: I1125 07:57:34.932156 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
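Two pods' lifecycles are interleaved in the stream above because everything tagged "SyncLoop (PLEG)" flows through one channel: the Pod Lifecycle Event Generator relists containers, diffs their states, and emits per-pod ContainerStarted/ContainerDied events that the sync loop serializes. A minimal consumer for events of the logged shape (the struct mirrors the event={"ID":...,"Type":...,"Data":...} payload; the bookkeeping is illustrative, not the kubelet's):

    package main

    import "fmt"

    // lifecycleEvent mirrors the logged payload: ID is the pod UID and
    // Data the container (or sandbox) ID the state change applies to.
    type lifecycleEvent struct {
        ID, Type, Data string
    }

    func main() {
        events := []lifecycleEvent{
            {"5641aee1-8992-446f-b4c9-6756b34867af", "ContainerStarted",
                "88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812"},
            {"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b", "ContainerDied",
                "c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2"},
        }
        running := map[string]map[string]bool{} // pod UID -> live container IDs
        for _, ev := range events {
            if running[ev.ID] == nil {
                running[ev.ID] = map[string]bool{}
            }
            switch ev.Type {
            case "ContainerStarted":
                running[ev.ID][ev.Data] = true
            case "ContainerDied":
                delete(running[ev.ID], ev.Data)
            }
            // The real sync loop re-syncs the pod here (status updates,
            // restarts, cleanup); this sketch only tracks what is running.
            fmt.Printf("pod %s: %d running container(s)\n", ev.ID, len(running[ev.ID]))
        }
    }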
pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:34 crc kubenswrapper[4482]: I1125 07:57:34.989091 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vcsxk" podUID="5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" containerName="registry-server" containerID="cri-o://c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2" gracePeriod=2 Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.031953 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.406367 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.518876 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-catalog-content\") pod \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\" (UID: \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\") " Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.518970 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-utilities\") pod \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\" (UID: \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\") " Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.519227 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk8qk\" (UniqueName: \"kubernetes.io/projected/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-kube-api-access-wk8qk\") pod \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\" (UID: \"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b\") " Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.519639 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-utilities" (OuterVolumeSpecName: "utilities") pod "5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" (UID: "5207c5ed-5ffd-410b-a8df-cf3781fc9c6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.551456 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" (UID: "5207c5ed-5ffd-410b-a8df-cf3781fc9c6b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.599493 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-kube-api-access-wk8qk" (OuterVolumeSpecName: "kube-api-access-wk8qk") pod "5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" (UID: "5207c5ed-5ffd-410b-a8df-cf3781fc9c6b"). InnerVolumeSpecName "kube-api-access-wk8qk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.622493 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.622520 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.622531 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk8qk\" (UniqueName: \"kubernetes.io/projected/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b-kube-api-access-wk8qk\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.997703 4482 generic.go:334] "Generic (PLEG): container finished" podID="5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" containerID="c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2" exitCode=0 Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.997888 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcsxk" Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.997994 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcsxk" event={"ID":"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b","Type":"ContainerDied","Data":"c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2"} Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.998035 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcsxk" event={"ID":"5207c5ed-5ffd-410b-a8df-cf3781fc9c6b","Type":"ContainerDied","Data":"d95941c299e6b0ed356457ad93dc7e91c1d0b66b7b925e48aa732e92892262de"} Nov 25 07:57:35 crc kubenswrapper[4482]: I1125 07:57:35.998056 4482 scope.go:117] "RemoveContainer" containerID="c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2" Nov 25 07:57:36 crc kubenswrapper[4482]: I1125 07:57:36.018686 4482 scope.go:117] "RemoveContainer" containerID="241932fbeee3287f562fc838fc69a0bfac1c4940492d5a1e065a27f81ff337fe" Nov 25 07:57:36 crc kubenswrapper[4482]: I1125 07:57:36.023690 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcsxk"] Nov 25 07:57:36 crc kubenswrapper[4482]: I1125 07:57:36.030535 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vcsxk"] Nov 25 07:57:36 crc kubenswrapper[4482]: I1125 07:57:36.035306 4482 scope.go:117] "RemoveContainer" containerID="d888cc6e4f3e5a103b1ee5d4519a1f7e953669078b60f475123405d5a4a2c7fd" Nov 25 07:57:36 crc kubenswrapper[4482]: I1125 07:57:36.069650 4482 scope.go:117] "RemoveContainer" containerID="c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2" Nov 25 07:57:36 crc kubenswrapper[4482]: E1125 07:57:36.070106 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2\": container with ID starting with c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2 not found: ID does not exist" containerID="c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2" Nov 25 07:57:36 crc kubenswrapper[4482]: I1125 07:57:36.070225 
4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2"} err="failed to get container status \"c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2\": rpc error: code = NotFound desc = could not find container \"c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2\": container with ID starting with c0e8483a49ccda75f4b33338d6afb03594acef37aeae8c4df127c11020b1bad2 not found: ID does not exist" Nov 25 07:57:36 crc kubenswrapper[4482]: I1125 07:57:36.070299 4482 scope.go:117] "RemoveContainer" containerID="241932fbeee3287f562fc838fc69a0bfac1c4940492d5a1e065a27f81ff337fe" Nov 25 07:57:36 crc kubenswrapper[4482]: E1125 07:57:36.072217 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"241932fbeee3287f562fc838fc69a0bfac1c4940492d5a1e065a27f81ff337fe\": container with ID starting with 241932fbeee3287f562fc838fc69a0bfac1c4940492d5a1e065a27f81ff337fe not found: ID does not exist" containerID="241932fbeee3287f562fc838fc69a0bfac1c4940492d5a1e065a27f81ff337fe" Nov 25 07:57:36 crc kubenswrapper[4482]: I1125 07:57:36.072266 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"241932fbeee3287f562fc838fc69a0bfac1c4940492d5a1e065a27f81ff337fe"} err="failed to get container status \"241932fbeee3287f562fc838fc69a0bfac1c4940492d5a1e065a27f81ff337fe\": rpc error: code = NotFound desc = could not find container \"241932fbeee3287f562fc838fc69a0bfac1c4940492d5a1e065a27f81ff337fe\": container with ID starting with 241932fbeee3287f562fc838fc69a0bfac1c4940492d5a1e065a27f81ff337fe not found: ID does not exist" Nov 25 07:57:36 crc kubenswrapper[4482]: I1125 07:57:36.072297 4482 scope.go:117] "RemoveContainer" containerID="d888cc6e4f3e5a103b1ee5d4519a1f7e953669078b60f475123405d5a4a2c7fd" Nov 25 07:57:36 crc kubenswrapper[4482]: E1125 07:57:36.072610 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d888cc6e4f3e5a103b1ee5d4519a1f7e953669078b60f475123405d5a4a2c7fd\": container with ID starting with d888cc6e4f3e5a103b1ee5d4519a1f7e953669078b60f475123405d5a4a2c7fd not found: ID does not exist" containerID="d888cc6e4f3e5a103b1ee5d4519a1f7e953669078b60f475123405d5a4a2c7fd" Nov 25 07:57:36 crc kubenswrapper[4482]: I1125 07:57:36.072632 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d888cc6e4f3e5a103b1ee5d4519a1f7e953669078b60f475123405d5a4a2c7fd"} err="failed to get container status \"d888cc6e4f3e5a103b1ee5d4519a1f7e953669078b60f475123405d5a4a2c7fd\": rpc error: code = NotFound desc = could not find container \"d888cc6e4f3e5a103b1ee5d4519a1f7e953669078b60f475123405d5a4a2c7fd\": container with ID starting with d888cc6e4f3e5a103b1ee5d4519a1f7e953669078b60f475123405d5a4a2c7fd not found: ID does not exist" Nov 25 07:57:37 crc kubenswrapper[4482]: I1125 07:57:37.183004 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9v95x"] Nov 25 07:57:37 crc kubenswrapper[4482]: I1125 07:57:37.183597 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9v95x" podUID="5641aee1-8992-446f-b4c9-6756b34867af" containerName="registry-server" containerID="cri-o://88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812" gracePeriod=2 Nov 25 
07:57:37 crc kubenswrapper[4482]: I1125 07:57:37.554154 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:37 crc kubenswrapper[4482]: I1125 07:57:37.665271 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5641aee1-8992-446f-b4c9-6756b34867af-catalog-content\") pod \"5641aee1-8992-446f-b4c9-6756b34867af\" (UID: \"5641aee1-8992-446f-b4c9-6756b34867af\") " Nov 25 07:57:37 crc kubenswrapper[4482]: I1125 07:57:37.665870 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5q8m\" (UniqueName: \"kubernetes.io/projected/5641aee1-8992-446f-b4c9-6756b34867af-kube-api-access-h5q8m\") pod \"5641aee1-8992-446f-b4c9-6756b34867af\" (UID: \"5641aee1-8992-446f-b4c9-6756b34867af\") " Nov 25 07:57:37 crc kubenswrapper[4482]: I1125 07:57:37.665912 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5641aee1-8992-446f-b4c9-6756b34867af-utilities\") pod \"5641aee1-8992-446f-b4c9-6756b34867af\" (UID: \"5641aee1-8992-446f-b4c9-6756b34867af\") " Nov 25 07:57:37 crc kubenswrapper[4482]: I1125 07:57:37.667287 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5641aee1-8992-446f-b4c9-6756b34867af-utilities" (OuterVolumeSpecName: "utilities") pod "5641aee1-8992-446f-b4c9-6756b34867af" (UID: "5641aee1-8992-446f-b4c9-6756b34867af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:57:37 crc kubenswrapper[4482]: I1125 07:57:37.672306 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5641aee1-8992-446f-b4c9-6756b34867af-kube-api-access-h5q8m" (OuterVolumeSpecName: "kube-api-access-h5q8m") pod "5641aee1-8992-446f-b4c9-6756b34867af" (UID: "5641aee1-8992-446f-b4c9-6756b34867af"). InnerVolumeSpecName "kube-api-access-h5q8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:57:37 crc kubenswrapper[4482]: I1125 07:57:37.710926 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5641aee1-8992-446f-b4c9-6756b34867af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5641aee1-8992-446f-b4c9-6756b34867af" (UID: "5641aee1-8992-446f-b4c9-6756b34867af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:57:37 crc kubenswrapper[4482]: I1125 07:57:37.768514 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5q8m\" (UniqueName: \"kubernetes.io/projected/5641aee1-8992-446f-b4c9-6756b34867af-kube-api-access-h5q8m\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:37 crc kubenswrapper[4482]: I1125 07:57:37.768545 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5641aee1-8992-446f-b4c9-6756b34867af-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:37 crc kubenswrapper[4482]: I1125 07:57:37.768556 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5641aee1-8992-446f-b4c9-6756b34867af-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:37 crc kubenswrapper[4482]: I1125 07:57:37.844680 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" path="/var/lib/kubelet/pods/5207c5ed-5ffd-410b-a8df-cf3781fc9c6b/volumes" Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.020562 4482 generic.go:334] "Generic (PLEG): container finished" podID="5641aee1-8992-446f-b4c9-6756b34867af" containerID="88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812" exitCode=0 Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.020608 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9v95x" event={"ID":"5641aee1-8992-446f-b4c9-6756b34867af","Type":"ContainerDied","Data":"88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812"} Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.020642 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9v95x" event={"ID":"5641aee1-8992-446f-b4c9-6756b34867af","Type":"ContainerDied","Data":"30d3c748c08f204443429b52cb3edac179c0f699837df032aa9b8e774a6060df"} Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.020665 4482 scope.go:117] "RemoveContainer" containerID="88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812" Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.020667 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9v95x" Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.042272 4482 scope.go:117] "RemoveContainer" containerID="7e4b532ed758650a180720c5d3c85267e65cf6979ede2c7a4a6311dc2c1bf2cb" Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.044996 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9v95x"] Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.056156 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9v95x"] Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.063114 4482 scope.go:117] "RemoveContainer" containerID="8081f85f02007d7b17bae2eee5b5a789f7215b62c3a5c8b47cde72f2c7a8eed3" Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.100522 4482 scope.go:117] "RemoveContainer" containerID="88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812" Nov 25 07:57:38 crc kubenswrapper[4482]: E1125 07:57:38.100880 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812\": container with ID starting with 88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812 not found: ID does not exist" containerID="88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812" Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.100913 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812"} err="failed to get container status \"88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812\": rpc error: code = NotFound desc = could not find container \"88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812\": container with ID starting with 88b15959f8b81daddeff5563d72e288347c70374a4e004de1fe8893dd4299812 not found: ID does not exist" Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.100936 4482 scope.go:117] "RemoveContainer" containerID="7e4b532ed758650a180720c5d3c85267e65cf6979ede2c7a4a6311dc2c1bf2cb" Nov 25 07:57:38 crc kubenswrapper[4482]: E1125 07:57:38.101352 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e4b532ed758650a180720c5d3c85267e65cf6979ede2c7a4a6311dc2c1bf2cb\": container with ID starting with 7e4b532ed758650a180720c5d3c85267e65cf6979ede2c7a4a6311dc2c1bf2cb not found: ID does not exist" containerID="7e4b532ed758650a180720c5d3c85267e65cf6979ede2c7a4a6311dc2c1bf2cb" Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.101380 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e4b532ed758650a180720c5d3c85267e65cf6979ede2c7a4a6311dc2c1bf2cb"} err="failed to get container status \"7e4b532ed758650a180720c5d3c85267e65cf6979ede2c7a4a6311dc2c1bf2cb\": rpc error: code = NotFound desc = could not find container \"7e4b532ed758650a180720c5d3c85267e65cf6979ede2c7a4a6311dc2c1bf2cb\": container with ID starting with 7e4b532ed758650a180720c5d3c85267e65cf6979ede2c7a4a6311dc2c1bf2cb not found: ID does not exist" Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.101397 4482 scope.go:117] "RemoveContainer" containerID="8081f85f02007d7b17bae2eee5b5a789f7215b62c3a5c8b47cde72f2c7a8eed3" Nov 25 07:57:38 crc kubenswrapper[4482]: E1125 07:57:38.101744 4482 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8081f85f02007d7b17bae2eee5b5a789f7215b62c3a5c8b47cde72f2c7a8eed3\": container with ID starting with 8081f85f02007d7b17bae2eee5b5a789f7215b62c3a5c8b47cde72f2c7a8eed3 not found: ID does not exist" containerID="8081f85f02007d7b17bae2eee5b5a789f7215b62c3a5c8b47cde72f2c7a8eed3" Nov 25 07:57:38 crc kubenswrapper[4482]: I1125 07:57:38.101784 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8081f85f02007d7b17bae2eee5b5a789f7215b62c3a5c8b47cde72f2c7a8eed3"} err="failed to get container status \"8081f85f02007d7b17bae2eee5b5a789f7215b62c3a5c8b47cde72f2c7a8eed3\": rpc error: code = NotFound desc = could not find container \"8081f85f02007d7b17bae2eee5b5a789f7215b62c3a5c8b47cde72f2c7a8eed3\": container with ID starting with 8081f85f02007d7b17bae2eee5b5a789f7215b62c3a5c8b47cde72f2c7a8eed3 not found: ID does not exist" Nov 25 07:57:39 crc kubenswrapper[4482]: I1125 07:57:39.118218 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:57:39 crc kubenswrapper[4482]: I1125 07:57:39.118277 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:57:39 crc kubenswrapper[4482]: I1125 07:57:39.840712 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5641aee1-8992-446f-b4c9-6756b34867af" path="/var/lib/kubelet/pods/5641aee1-8992-446f-b4c9-6756b34867af/volumes" Nov 25 07:58:09 crc kubenswrapper[4482]: I1125 07:58:09.117618 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:58:09 crc kubenswrapper[4482]: I1125 07:58:09.118015 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:58:09 crc kubenswrapper[4482]: I1125 07:58:09.118053 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 07:58:09 crc kubenswrapper[4482]: I1125 07:58:09.118576 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:58:09 crc kubenswrapper[4482]: I1125 07:58:09.118627 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" 
podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" gracePeriod=600 Nov 25 07:58:09 crc kubenswrapper[4482]: E1125 07:58:09.232505 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:58:10 crc kubenswrapper[4482]: I1125 07:58:10.227225 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" exitCode=0 Nov 25 07:58:10 crc kubenswrapper[4482]: I1125 07:58:10.227264 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78"} Nov 25 07:58:10 crc kubenswrapper[4482]: I1125 07:58:10.227476 4482 scope.go:117] "RemoveContainer" containerID="11eb2cb23f6adedeffdaa50c183b54a466ab6684b521a51657d0398e5a86a518" Nov 25 07:58:10 crc kubenswrapper[4482]: I1125 07:58:10.227897 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 07:58:10 crc kubenswrapper[4482]: E1125 07:58:10.228104 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:58:23 crc kubenswrapper[4482]: I1125 07:58:23.830722 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 07:58:23 crc kubenswrapper[4482]: E1125 07:58:23.831340 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:58:38 crc kubenswrapper[4482]: I1125 07:58:38.831957 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 07:58:38 crc kubenswrapper[4482]: E1125 07:58:38.833191 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:58:49 crc kubenswrapper[4482]: I1125 07:58:49.831464 4482 scope.go:117] 
"RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 07:58:49 crc kubenswrapper[4482]: E1125 07:58:49.832984 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:59:02 crc kubenswrapper[4482]: I1125 07:59:02.831101 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 07:59:02 crc kubenswrapper[4482]: E1125 07:59:02.831963 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:59:15 crc kubenswrapper[4482]: I1125 07:59:15.835395 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 07:59:15 crc kubenswrapper[4482]: E1125 07:59:15.837065 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:59:29 crc kubenswrapper[4482]: I1125 07:59:29.832248 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 07:59:29 crc kubenswrapper[4482]: E1125 07:59:29.832931 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:59:40 crc kubenswrapper[4482]: I1125 07:59:40.830797 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 07:59:40 crc kubenswrapper[4482]: E1125 07:59:40.831692 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 07:59:54 crc kubenswrapper[4482]: I1125 07:59:54.831425 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 07:59:54 crc kubenswrapper[4482]: E1125 07:59:54.832148 4482 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.161023 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7"] Nov 25 08:00:00 crc kubenswrapper[4482]: E1125 08:00:00.162038 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5641aee1-8992-446f-b4c9-6756b34867af" containerName="extract-utilities" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.162052 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="5641aee1-8992-446f-b4c9-6756b34867af" containerName="extract-utilities" Nov 25 08:00:00 crc kubenswrapper[4482]: E1125 08:00:00.162077 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" containerName="extract-content" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.162083 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" containerName="extract-content" Nov 25 08:00:00 crc kubenswrapper[4482]: E1125 08:00:00.162094 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" containerName="extract-utilities" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.162099 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" containerName="extract-utilities" Nov 25 08:00:00 crc kubenswrapper[4482]: E1125 08:00:00.162120 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5641aee1-8992-446f-b4c9-6756b34867af" containerName="registry-server" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.162125 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="5641aee1-8992-446f-b4c9-6756b34867af" containerName="registry-server" Nov 25 08:00:00 crc kubenswrapper[4482]: E1125 08:00:00.162137 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" containerName="registry-server" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.162142 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" containerName="registry-server" Nov 25 08:00:00 crc kubenswrapper[4482]: E1125 08:00:00.162155 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5641aee1-8992-446f-b4c9-6756b34867af" containerName="extract-content" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.162159 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="5641aee1-8992-446f-b4c9-6756b34867af" containerName="extract-content" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.162367 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="5207c5ed-5ffd-410b-a8df-cf3781fc9c6b" containerName="registry-server" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.162394 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="5641aee1-8992-446f-b4c9-6756b34867af" containerName="registry-server" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.162936 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.177214 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7"] Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.177365 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.177578 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.307773 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7109a3-bb73-4533-bb5b-c7e52179326d-config-volume\") pod \"collect-profiles-29400960-g7xr7\" (UID: \"3f7109a3-bb73-4533-bb5b-c7e52179326d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.307862 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7109a3-bb73-4533-bb5b-c7e52179326d-secret-volume\") pod \"collect-profiles-29400960-g7xr7\" (UID: \"3f7109a3-bb73-4533-bb5b-c7e52179326d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.307921 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg6pf\" (UniqueName: \"kubernetes.io/projected/3f7109a3-bb73-4533-bb5b-c7e52179326d-kube-api-access-zg6pf\") pod \"collect-profiles-29400960-g7xr7\" (UID: \"3f7109a3-bb73-4533-bb5b-c7e52179326d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.409677 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7109a3-bb73-4533-bb5b-c7e52179326d-config-volume\") pod \"collect-profiles-29400960-g7xr7\" (UID: \"3f7109a3-bb73-4533-bb5b-c7e52179326d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.409751 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7109a3-bb73-4533-bb5b-c7e52179326d-secret-volume\") pod \"collect-profiles-29400960-g7xr7\" (UID: \"3f7109a3-bb73-4533-bb5b-c7e52179326d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.409828 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg6pf\" (UniqueName: \"kubernetes.io/projected/3f7109a3-bb73-4533-bb5b-c7e52179326d-kube-api-access-zg6pf\") pod \"collect-profiles-29400960-g7xr7\" (UID: \"3f7109a3-bb73-4533-bb5b-c7e52179326d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.410586 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7109a3-bb73-4533-bb5b-c7e52179326d-config-volume\") pod 
\"collect-profiles-29400960-g7xr7\" (UID: \"3f7109a3-bb73-4533-bb5b-c7e52179326d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.418785 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7109a3-bb73-4533-bb5b-c7e52179326d-secret-volume\") pod \"collect-profiles-29400960-g7xr7\" (UID: \"3f7109a3-bb73-4533-bb5b-c7e52179326d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.424627 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg6pf\" (UniqueName: \"kubernetes.io/projected/3f7109a3-bb73-4533-bb5b-c7e52179326d-kube-api-access-zg6pf\") pod \"collect-profiles-29400960-g7xr7\" (UID: \"3f7109a3-bb73-4533-bb5b-c7e52179326d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.478087 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" Nov 25 08:00:00 crc kubenswrapper[4482]: I1125 08:00:00.877220 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7"] Nov 25 08:00:01 crc kubenswrapper[4482]: I1125 08:00:01.039268 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" event={"ID":"3f7109a3-bb73-4533-bb5b-c7e52179326d","Type":"ContainerStarted","Data":"3c8248dae98b89986c8c8c30988aba4f455b7c60d263af6bb75e71389bfdda25"} Nov 25 08:00:01 crc kubenswrapper[4482]: I1125 08:00:01.039601 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" event={"ID":"3f7109a3-bb73-4533-bb5b-c7e52179326d","Type":"ContainerStarted","Data":"acb24620d4616c533d356f47a7d57e8b8e67783c7bf99b08f4ac3af49b9a6e1a"} Nov 25 08:00:01 crc kubenswrapper[4482]: I1125 08:00:01.054242 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" podStartSLOduration=1.054222501 podStartE2EDuration="1.054222501s" podCreationTimestamp="2025-11-25 08:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:00:01.049410081 +0000 UTC m=+4375.537641340" watchObservedRunningTime="2025-11-25 08:00:01.054222501 +0000 UTC m=+4375.542453760" Nov 25 08:00:02 crc kubenswrapper[4482]: I1125 08:00:02.048069 4482 generic.go:334] "Generic (PLEG): container finished" podID="3f7109a3-bb73-4533-bb5b-c7e52179326d" containerID="3c8248dae98b89986c8c8c30988aba4f455b7c60d263af6bb75e71389bfdda25" exitCode=0 Nov 25 08:00:02 crc kubenswrapper[4482]: I1125 08:00:02.048104 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" event={"ID":"3f7109a3-bb73-4533-bb5b-c7e52179326d","Type":"ContainerDied","Data":"3c8248dae98b89986c8c8c30988aba4f455b7c60d263af6bb75e71389bfdda25"} Nov 25 08:00:03 crc kubenswrapper[4482]: I1125 08:00:03.420211 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" Nov 25 08:00:03 crc kubenswrapper[4482]: I1125 08:00:03.565815 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7109a3-bb73-4533-bb5b-c7e52179326d-config-volume\") pod \"3f7109a3-bb73-4533-bb5b-c7e52179326d\" (UID: \"3f7109a3-bb73-4533-bb5b-c7e52179326d\") " Nov 25 08:00:03 crc kubenswrapper[4482]: I1125 08:00:03.565868 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7109a3-bb73-4533-bb5b-c7e52179326d-secret-volume\") pod \"3f7109a3-bb73-4533-bb5b-c7e52179326d\" (UID: \"3f7109a3-bb73-4533-bb5b-c7e52179326d\") " Nov 25 08:00:03 crc kubenswrapper[4482]: I1125 08:00:03.565899 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg6pf\" (UniqueName: \"kubernetes.io/projected/3f7109a3-bb73-4533-bb5b-c7e52179326d-kube-api-access-zg6pf\") pod \"3f7109a3-bb73-4533-bb5b-c7e52179326d\" (UID: \"3f7109a3-bb73-4533-bb5b-c7e52179326d\") " Nov 25 08:00:03 crc kubenswrapper[4482]: I1125 08:00:03.566654 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7109a3-bb73-4533-bb5b-c7e52179326d-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f7109a3-bb73-4533-bb5b-c7e52179326d" (UID: "3f7109a3-bb73-4533-bb5b-c7e52179326d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:00:03 crc kubenswrapper[4482]: I1125 08:00:03.571062 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7109a3-bb73-4533-bb5b-c7e52179326d-kube-api-access-zg6pf" (OuterVolumeSpecName: "kube-api-access-zg6pf") pod "3f7109a3-bb73-4533-bb5b-c7e52179326d" (UID: "3f7109a3-bb73-4533-bb5b-c7e52179326d"). InnerVolumeSpecName "kube-api-access-zg6pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:00:03 crc kubenswrapper[4482]: I1125 08:00:03.571196 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f7109a3-bb73-4533-bb5b-c7e52179326d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f7109a3-bb73-4533-bb5b-c7e52179326d" (UID: "3f7109a3-bb73-4533-bb5b-c7e52179326d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:00:03 crc kubenswrapper[4482]: I1125 08:00:03.667955 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg6pf\" (UniqueName: \"kubernetes.io/projected/3f7109a3-bb73-4533-bb5b-c7e52179326d-kube-api-access-zg6pf\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:03 crc kubenswrapper[4482]: I1125 08:00:03.667985 4482 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7109a3-bb73-4533-bb5b-c7e52179326d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:03 crc kubenswrapper[4482]: I1125 08:00:03.667994 4482 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7109a3-bb73-4533-bb5b-c7e52179326d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:04 crc kubenswrapper[4482]: I1125 08:00:04.061376 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" event={"ID":"3f7109a3-bb73-4533-bb5b-c7e52179326d","Type":"ContainerDied","Data":"acb24620d4616c533d356f47a7d57e8b8e67783c7bf99b08f4ac3af49b9a6e1a"} Nov 25 08:00:04 crc kubenswrapper[4482]: I1125 08:00:04.061612 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acb24620d4616c533d356f47a7d57e8b8e67783c7bf99b08f4ac3af49b9a6e1a" Nov 25 08:00:04 crc kubenswrapper[4482]: I1125 08:00:04.061416 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7" Nov 25 08:00:04 crc kubenswrapper[4482]: I1125 08:00:04.476734 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26"] Nov 25 08:00:04 crc kubenswrapper[4482]: I1125 08:00:04.482599 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400915-htw26"] Nov 25 08:00:05 crc kubenswrapper[4482]: I1125 08:00:05.841313 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c30f6098-f136-489a-a90a-e8e76cae8fcb" path="/var/lib/kubelet/pods/c30f6098-f136-489a-a90a-e8e76cae8fcb/volumes" Nov 25 08:00:07 crc kubenswrapper[4482]: I1125 08:00:07.830310 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:00:07 crc kubenswrapper[4482]: E1125 08:00:07.830672 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:00:19 crc kubenswrapper[4482]: I1125 08:00:19.831914 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:00:19 crc kubenswrapper[4482]: E1125 08:00:19.832455 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:00:30 crc kubenswrapper[4482]: I1125 08:00:30.830517 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:00:30 crc kubenswrapper[4482]: E1125 08:00:30.831043 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:00:41 crc kubenswrapper[4482]: I1125 08:00:41.061990 4482 scope.go:117] "RemoveContainer" containerID="ea67dddf7ba55afe9d550875bb8082d3c2b87c5b81287372d159ff050ab49763" Nov 25 08:00:43 crc kubenswrapper[4482]: I1125 08:00:43.834821 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:00:43 crc kubenswrapper[4482]: E1125 08:00:43.835481 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:00:54 crc kubenswrapper[4482]: I1125 08:00:54.830621 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:00:54 crc kubenswrapper[4482]: E1125 08:00:54.831202 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.150026 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29400961-hsnmw"] Nov 25 08:01:00 crc kubenswrapper[4482]: E1125 08:01:00.150955 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7109a3-bb73-4533-bb5b-c7e52179326d" containerName="collect-profiles" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.150967 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7109a3-bb73-4533-bb5b-c7e52179326d" containerName="collect-profiles" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.151323 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7109a3-bb73-4533-bb5b-c7e52179326d" containerName="collect-profiles" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.152478 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.161554 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400961-hsnmw"] Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.223598 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-combined-ca-bundle\") pod \"keystone-cron-29400961-hsnmw\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.223667 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-fernet-keys\") pod \"keystone-cron-29400961-hsnmw\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.223861 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm7kn\" (UniqueName: \"kubernetes.io/projected/95cec20e-9b81-4f9d-9954-d777edc6b842-kube-api-access-xm7kn\") pod \"keystone-cron-29400961-hsnmw\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.223997 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-config-data\") pod \"keystone-cron-29400961-hsnmw\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.325145 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-fernet-keys\") pod \"keystone-cron-29400961-hsnmw\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.325230 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm7kn\" (UniqueName: \"kubernetes.io/projected/95cec20e-9b81-4f9d-9954-d777edc6b842-kube-api-access-xm7kn\") pod \"keystone-cron-29400961-hsnmw\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.325307 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-config-data\") pod \"keystone-cron-29400961-hsnmw\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.325344 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-combined-ca-bundle\") pod \"keystone-cron-29400961-hsnmw\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.329718 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-combined-ca-bundle\") pod \"keystone-cron-29400961-hsnmw\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.331226 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-fernet-keys\") pod \"keystone-cron-29400961-hsnmw\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.331262 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-config-data\") pod \"keystone-cron-29400961-hsnmw\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.338082 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm7kn\" (UniqueName: \"kubernetes.io/projected/95cec20e-9b81-4f9d-9954-d777edc6b842-kube-api-access-xm7kn\") pod \"keystone-cron-29400961-hsnmw\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.482617 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:00 crc kubenswrapper[4482]: I1125 08:01:00.845678 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400961-hsnmw"] Nov 25 08:01:01 crc kubenswrapper[4482]: I1125 08:01:01.450912 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400961-hsnmw" event={"ID":"95cec20e-9b81-4f9d-9954-d777edc6b842","Type":"ContainerStarted","Data":"56799c63aae8172737d85596bb3aeaa3b136bc35b78d479caad56d1ca7a45417"} Nov 25 08:01:01 crc kubenswrapper[4482]: I1125 08:01:01.451120 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400961-hsnmw" event={"ID":"95cec20e-9b81-4f9d-9954-d777edc6b842","Type":"ContainerStarted","Data":"8d6e7d3d80972a494faf8030c4b8154585ddba429de06c519d1f652994a2ec3a"} Nov 25 08:01:01 crc kubenswrapper[4482]: I1125 08:01:01.464011 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29400961-hsnmw" podStartSLOduration=1.463999283 podStartE2EDuration="1.463999283s" podCreationTimestamp="2025-11-25 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:01:01.460163473 +0000 UTC m=+4435.948394733" watchObservedRunningTime="2025-11-25 08:01:01.463999283 +0000 UTC m=+4435.952230543" Nov 25 08:01:03 crc kubenswrapper[4482]: I1125 08:01:03.464444 4482 generic.go:334] "Generic (PLEG): container finished" podID="95cec20e-9b81-4f9d-9954-d777edc6b842" containerID="56799c63aae8172737d85596bb3aeaa3b136bc35b78d479caad56d1ca7a45417" exitCode=0 Nov 25 08:01:03 crc kubenswrapper[4482]: I1125 08:01:03.464600 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400961-hsnmw" event={"ID":"95cec20e-9b81-4f9d-9954-d777edc6b842","Type":"ContainerDied","Data":"56799c63aae8172737d85596bb3aeaa3b136bc35b78d479caad56d1ca7a45417"} Nov 25 08:01:04 crc kubenswrapper[4482]: 
I1125 08:01:04.772625 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:04 crc kubenswrapper[4482]: I1125 08:01:04.888669 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm7kn\" (UniqueName: \"kubernetes.io/projected/95cec20e-9b81-4f9d-9954-d777edc6b842-kube-api-access-xm7kn\") pod \"95cec20e-9b81-4f9d-9954-d777edc6b842\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " Nov 25 08:01:04 crc kubenswrapper[4482]: I1125 08:01:04.888931 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-combined-ca-bundle\") pod \"95cec20e-9b81-4f9d-9954-d777edc6b842\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " Nov 25 08:01:04 crc kubenswrapper[4482]: I1125 08:01:04.888975 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-fernet-keys\") pod \"95cec20e-9b81-4f9d-9954-d777edc6b842\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " Nov 25 08:01:04 crc kubenswrapper[4482]: I1125 08:01:04.888998 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-config-data\") pod \"95cec20e-9b81-4f9d-9954-d777edc6b842\" (UID: \"95cec20e-9b81-4f9d-9954-d777edc6b842\") " Nov 25 08:01:04 crc kubenswrapper[4482]: I1125 08:01:04.893068 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "95cec20e-9b81-4f9d-9954-d777edc6b842" (UID: "95cec20e-9b81-4f9d-9954-d777edc6b842"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:01:04 crc kubenswrapper[4482]: I1125 08:01:04.894150 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95cec20e-9b81-4f9d-9954-d777edc6b842-kube-api-access-xm7kn" (OuterVolumeSpecName: "kube-api-access-xm7kn") pod "95cec20e-9b81-4f9d-9954-d777edc6b842" (UID: "95cec20e-9b81-4f9d-9954-d777edc6b842"). InnerVolumeSpecName "kube-api-access-xm7kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:01:04 crc kubenswrapper[4482]: I1125 08:01:04.910091 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95cec20e-9b81-4f9d-9954-d777edc6b842" (UID: "95cec20e-9b81-4f9d-9954-d777edc6b842"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:01:04 crc kubenswrapper[4482]: I1125 08:01:04.926507 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-config-data" (OuterVolumeSpecName: "config-data") pod "95cec20e-9b81-4f9d-9954-d777edc6b842" (UID: "95cec20e-9b81-4f9d-9954-d777edc6b842"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:01:04 crc kubenswrapper[4482]: I1125 08:01:04.990607 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm7kn\" (UniqueName: \"kubernetes.io/projected/95cec20e-9b81-4f9d-9954-d777edc6b842-kube-api-access-xm7kn\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:04 crc kubenswrapper[4482]: I1125 08:01:04.990632 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:04 crc kubenswrapper[4482]: I1125 08:01:04.990641 4482 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:04 crc kubenswrapper[4482]: I1125 08:01:04.990650 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cec20e-9b81-4f9d-9954-d777edc6b842-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:05 crc kubenswrapper[4482]: I1125 08:01:05.477112 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400961-hsnmw" event={"ID":"95cec20e-9b81-4f9d-9954-d777edc6b842","Type":"ContainerDied","Data":"8d6e7d3d80972a494faf8030c4b8154585ddba429de06c519d1f652994a2ec3a"} Nov 25 08:01:05 crc kubenswrapper[4482]: I1125 08:01:05.477310 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d6e7d3d80972a494faf8030c4b8154585ddba429de06c519d1f652994a2ec3a" Nov 25 08:01:05 crc kubenswrapper[4482]: I1125 08:01:05.477181 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400961-hsnmw" Nov 25 08:01:09 crc kubenswrapper[4482]: I1125 08:01:09.831068 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:01:09 crc kubenswrapper[4482]: E1125 08:01:09.832367 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:01:22 crc kubenswrapper[4482]: I1125 08:01:22.832439 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:01:22 crc kubenswrapper[4482]: E1125 08:01:22.833553 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:01:37 crc kubenswrapper[4482]: I1125 08:01:37.830993 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:01:37 crc kubenswrapper[4482]: E1125 08:01:37.831744 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:01:52 crc kubenswrapper[4482]: I1125 08:01:52.830409 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:01:52 crc kubenswrapper[4482]: E1125 08:01:52.831247 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:02:05 crc kubenswrapper[4482]: I1125 08:02:05.836047 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:02:05 crc kubenswrapper[4482]: E1125 08:02:05.836708 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:02:19 crc kubenswrapper[4482]: I1125 08:02:19.830690 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:02:19 crc kubenswrapper[4482]: E1125 08:02:19.831339 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:02:33 crc kubenswrapper[4482]: I1125 08:02:33.830989 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:02:33 crc kubenswrapper[4482]: E1125 08:02:33.831688 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:02:48 crc kubenswrapper[4482]: I1125 08:02:48.831211 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:02:48 crc kubenswrapper[4482]: E1125 08:02:48.831866 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:02:59 crc kubenswrapper[4482]: I1125 08:02:59.831510 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:02:59 crc kubenswrapper[4482]: E1125 08:02:59.832031 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.560201 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jcj72"] Nov 25 08:03:04 crc kubenswrapper[4482]: E1125 08:03:04.560878 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cec20e-9b81-4f9d-9954-d777edc6b842" containerName="keystone-cron" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.560891 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cec20e-9b81-4f9d-9954-d777edc6b842" containerName="keystone-cron" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.561268 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cec20e-9b81-4f9d-9954-d777edc6b842" containerName="keystone-cron" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.562657 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.571074 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jcj72"] Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.712968 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044e7485-dd22-47f1-81a5-71f4eb04338f-catalog-content\") pod \"redhat-operators-jcj72\" (UID: \"044e7485-dd22-47f1-81a5-71f4eb04338f\") " pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.713178 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044e7485-dd22-47f1-81a5-71f4eb04338f-utilities\") pod \"redhat-operators-jcj72\" (UID: \"044e7485-dd22-47f1-81a5-71f4eb04338f\") " pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.713440 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn2qg\" (UniqueName: \"kubernetes.io/projected/044e7485-dd22-47f1-81a5-71f4eb04338f-kube-api-access-tn2qg\") pod \"redhat-operators-jcj72\" (UID: \"044e7485-dd22-47f1-81a5-71f4eb04338f\") " pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.814245 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044e7485-dd22-47f1-81a5-71f4eb04338f-utilities\") pod \"redhat-operators-jcj72\" (UID: 
\"044e7485-dd22-47f1-81a5-71f4eb04338f\") " pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.814609 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044e7485-dd22-47f1-81a5-71f4eb04338f-utilities\") pod \"redhat-operators-jcj72\" (UID: \"044e7485-dd22-47f1-81a5-71f4eb04338f\") " pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.814776 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn2qg\" (UniqueName: \"kubernetes.io/projected/044e7485-dd22-47f1-81a5-71f4eb04338f-kube-api-access-tn2qg\") pod \"redhat-operators-jcj72\" (UID: \"044e7485-dd22-47f1-81a5-71f4eb04338f\") " pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.815080 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044e7485-dd22-47f1-81a5-71f4eb04338f-catalog-content\") pod \"redhat-operators-jcj72\" (UID: \"044e7485-dd22-47f1-81a5-71f4eb04338f\") " pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.815333 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044e7485-dd22-47f1-81a5-71f4eb04338f-catalog-content\") pod \"redhat-operators-jcj72\" (UID: \"044e7485-dd22-47f1-81a5-71f4eb04338f\") " pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.833854 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn2qg\" (UniqueName: \"kubernetes.io/projected/044e7485-dd22-47f1-81a5-71f4eb04338f-kube-api-access-tn2qg\") pod \"redhat-operators-jcj72\" (UID: \"044e7485-dd22-47f1-81a5-71f4eb04338f\") " pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:04 crc kubenswrapper[4482]: I1125 08:03:04.884363 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:05 crc kubenswrapper[4482]: W1125 08:03:05.266089 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod044e7485_dd22_47f1_81a5_71f4eb04338f.slice/crio-5315318847b006e88f42e140329c143c0ef146c40613f9c24c834b5679fc9d21 WatchSource:0}: Error finding container 5315318847b006e88f42e140329c143c0ef146c40613f9c24c834b5679fc9d21: Status 404 returned error can't find the container with id 5315318847b006e88f42e140329c143c0ef146c40613f9c24c834b5679fc9d21 Nov 25 08:03:05 crc kubenswrapper[4482]: I1125 08:03:05.268134 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jcj72"] Nov 25 08:03:05 crc kubenswrapper[4482]: I1125 08:03:05.380145 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcj72" event={"ID":"044e7485-dd22-47f1-81a5-71f4eb04338f","Type":"ContainerStarted","Data":"5315318847b006e88f42e140329c143c0ef146c40613f9c24c834b5679fc9d21"} Nov 25 08:03:06 crc kubenswrapper[4482]: I1125 08:03:06.388299 4482 generic.go:334] "Generic (PLEG): container finished" podID="044e7485-dd22-47f1-81a5-71f4eb04338f" containerID="439d76d89233866ecf876c6ec53a2ac0207a12827b66936d8872f82f03c60a1a" exitCode=0 Nov 25 08:03:06 crc kubenswrapper[4482]: I1125 08:03:06.388608 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcj72" event={"ID":"044e7485-dd22-47f1-81a5-71f4eb04338f","Type":"ContainerDied","Data":"439d76d89233866ecf876c6ec53a2ac0207a12827b66936d8872f82f03c60a1a"} Nov 25 08:03:06 crc kubenswrapper[4482]: I1125 08:03:06.390250 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:03:07 crc kubenswrapper[4482]: I1125 08:03:07.398044 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcj72" event={"ID":"044e7485-dd22-47f1-81a5-71f4eb04338f","Type":"ContainerStarted","Data":"74411f534237531c5956f04ce03de9e065f699b56a86f632206a2bdc4cbe84cf"} Nov 25 08:03:09 crc kubenswrapper[4482]: I1125 08:03:09.413789 4482 generic.go:334] "Generic (PLEG): container finished" podID="044e7485-dd22-47f1-81a5-71f4eb04338f" containerID="74411f534237531c5956f04ce03de9e065f699b56a86f632206a2bdc4cbe84cf" exitCode=0 Nov 25 08:03:09 crc kubenswrapper[4482]: I1125 08:03:09.413896 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcj72" event={"ID":"044e7485-dd22-47f1-81a5-71f4eb04338f","Type":"ContainerDied","Data":"74411f534237531c5956f04ce03de9e065f699b56a86f632206a2bdc4cbe84cf"} Nov 25 08:03:10 crc kubenswrapper[4482]: I1125 08:03:10.431604 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcj72" event={"ID":"044e7485-dd22-47f1-81a5-71f4eb04338f","Type":"ContainerStarted","Data":"2b6800114bf21459ce8d9368b9e1e2ca7eb363858cf714f612d721ec9054ec27"} Nov 25 08:03:10 crc kubenswrapper[4482]: I1125 08:03:10.450787 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jcj72" podStartSLOduration=2.924625083 podStartE2EDuration="6.450762214s" podCreationTimestamp="2025-11-25 08:03:04 +0000 UTC" firstStartedPulling="2025-11-25 08:03:06.390049579 +0000 UTC m=+4560.878280838" lastFinishedPulling="2025-11-25 08:03:09.91618671 +0000 UTC m=+4564.404417969" 
observedRunningTime="2025-11-25 08:03:10.446869827 +0000 UTC m=+4564.935101087" watchObservedRunningTime="2025-11-25 08:03:10.450762214 +0000 UTC m=+4564.938993472" Nov 25 08:03:10 crc kubenswrapper[4482]: I1125 08:03:10.831226 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:03:11 crc kubenswrapper[4482]: I1125 08:03:11.444258 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"b771395cc7f74353b9f17d150b95c27d8459871c82cb05d24be7ce86de7b60a2"} Nov 25 08:03:14 crc kubenswrapper[4482]: I1125 08:03:14.884771 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:14 crc kubenswrapper[4482]: I1125 08:03:14.884992 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:15 crc kubenswrapper[4482]: I1125 08:03:15.919858 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jcj72" podUID="044e7485-dd22-47f1-81a5-71f4eb04338f" containerName="registry-server" probeResult="failure" output=< Nov 25 08:03:15 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 08:03:15 crc kubenswrapper[4482]: > Nov 25 08:03:24 crc kubenswrapper[4482]: I1125 08:03:24.923028 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:24 crc kubenswrapper[4482]: I1125 08:03:24.967284 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:25 crc kubenswrapper[4482]: I1125 08:03:25.154205 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jcj72"] Nov 25 08:03:26 crc kubenswrapper[4482]: I1125 08:03:26.548885 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jcj72" podUID="044e7485-dd22-47f1-81a5-71f4eb04338f" containerName="registry-server" containerID="cri-o://2b6800114bf21459ce8d9368b9e1e2ca7eb363858cf714f612d721ec9054ec27" gracePeriod=2 Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.007441 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.200658 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044e7485-dd22-47f1-81a5-71f4eb04338f-utilities\") pod \"044e7485-dd22-47f1-81a5-71f4eb04338f\" (UID: \"044e7485-dd22-47f1-81a5-71f4eb04338f\") " Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.200724 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn2qg\" (UniqueName: \"kubernetes.io/projected/044e7485-dd22-47f1-81a5-71f4eb04338f-kube-api-access-tn2qg\") pod \"044e7485-dd22-47f1-81a5-71f4eb04338f\" (UID: \"044e7485-dd22-47f1-81a5-71f4eb04338f\") " Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.200757 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044e7485-dd22-47f1-81a5-71f4eb04338f-catalog-content\") pod \"044e7485-dd22-47f1-81a5-71f4eb04338f\" (UID: \"044e7485-dd22-47f1-81a5-71f4eb04338f\") " Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.201299 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/044e7485-dd22-47f1-81a5-71f4eb04338f-utilities" (OuterVolumeSpecName: "utilities") pod "044e7485-dd22-47f1-81a5-71f4eb04338f" (UID: "044e7485-dd22-47f1-81a5-71f4eb04338f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.201651 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044e7485-dd22-47f1-81a5-71f4eb04338f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.208133 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/044e7485-dd22-47f1-81a5-71f4eb04338f-kube-api-access-tn2qg" (OuterVolumeSpecName: "kube-api-access-tn2qg") pod "044e7485-dd22-47f1-81a5-71f4eb04338f" (UID: "044e7485-dd22-47f1-81a5-71f4eb04338f"). InnerVolumeSpecName "kube-api-access-tn2qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.250003 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/044e7485-dd22-47f1-81a5-71f4eb04338f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "044e7485-dd22-47f1-81a5-71f4eb04338f" (UID: "044e7485-dd22-47f1-81a5-71f4eb04338f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.305190 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044e7485-dd22-47f1-81a5-71f4eb04338f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.305224 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn2qg\" (UniqueName: \"kubernetes.io/projected/044e7485-dd22-47f1-81a5-71f4eb04338f-kube-api-access-tn2qg\") on node \"crc\" DevicePath \"\"" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.557571 4482 generic.go:334] "Generic (PLEG): container finished" podID="044e7485-dd22-47f1-81a5-71f4eb04338f" containerID="2b6800114bf21459ce8d9368b9e1e2ca7eb363858cf714f612d721ec9054ec27" exitCode=0 Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.557634 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jcj72" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.557663 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcj72" event={"ID":"044e7485-dd22-47f1-81a5-71f4eb04338f","Type":"ContainerDied","Data":"2b6800114bf21459ce8d9368b9e1e2ca7eb363858cf714f612d721ec9054ec27"} Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.558698 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcj72" event={"ID":"044e7485-dd22-47f1-81a5-71f4eb04338f","Type":"ContainerDied","Data":"5315318847b006e88f42e140329c143c0ef146c40613f9c24c834b5679fc9d21"} Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.558721 4482 scope.go:117] "RemoveContainer" containerID="2b6800114bf21459ce8d9368b9e1e2ca7eb363858cf714f612d721ec9054ec27" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.583542 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jcj72"] Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.583611 4482 scope.go:117] "RemoveContainer" containerID="74411f534237531c5956f04ce03de9e065f699b56a86f632206a2bdc4cbe84cf" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.590874 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jcj72"] Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.612234 4482 scope.go:117] "RemoveContainer" containerID="439d76d89233866ecf876c6ec53a2ac0207a12827b66936d8872f82f03c60a1a" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.633931 4482 scope.go:117] "RemoveContainer" containerID="2b6800114bf21459ce8d9368b9e1e2ca7eb363858cf714f612d721ec9054ec27" Nov 25 08:03:27 crc kubenswrapper[4482]: E1125 08:03:27.634356 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b6800114bf21459ce8d9368b9e1e2ca7eb363858cf714f612d721ec9054ec27\": container with ID starting with 2b6800114bf21459ce8d9368b9e1e2ca7eb363858cf714f612d721ec9054ec27 not found: ID does not exist" containerID="2b6800114bf21459ce8d9368b9e1e2ca7eb363858cf714f612d721ec9054ec27" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.634417 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b6800114bf21459ce8d9368b9e1e2ca7eb363858cf714f612d721ec9054ec27"} err="failed to get container status \"2b6800114bf21459ce8d9368b9e1e2ca7eb363858cf714f612d721ec9054ec27\": 
rpc error: code = NotFound desc = could not find container \"2b6800114bf21459ce8d9368b9e1e2ca7eb363858cf714f612d721ec9054ec27\": container with ID starting with 2b6800114bf21459ce8d9368b9e1e2ca7eb363858cf714f612d721ec9054ec27 not found: ID does not exist" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.634447 4482 scope.go:117] "RemoveContainer" containerID="74411f534237531c5956f04ce03de9e065f699b56a86f632206a2bdc4cbe84cf" Nov 25 08:03:27 crc kubenswrapper[4482]: E1125 08:03:27.634898 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74411f534237531c5956f04ce03de9e065f699b56a86f632206a2bdc4cbe84cf\": container with ID starting with 74411f534237531c5956f04ce03de9e065f699b56a86f632206a2bdc4cbe84cf not found: ID does not exist" containerID="74411f534237531c5956f04ce03de9e065f699b56a86f632206a2bdc4cbe84cf" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.634942 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74411f534237531c5956f04ce03de9e065f699b56a86f632206a2bdc4cbe84cf"} err="failed to get container status \"74411f534237531c5956f04ce03de9e065f699b56a86f632206a2bdc4cbe84cf\": rpc error: code = NotFound desc = could not find container \"74411f534237531c5956f04ce03de9e065f699b56a86f632206a2bdc4cbe84cf\": container with ID starting with 74411f534237531c5956f04ce03de9e065f699b56a86f632206a2bdc4cbe84cf not found: ID does not exist" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.634972 4482 scope.go:117] "RemoveContainer" containerID="439d76d89233866ecf876c6ec53a2ac0207a12827b66936d8872f82f03c60a1a" Nov 25 08:03:27 crc kubenswrapper[4482]: E1125 08:03:27.635199 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"439d76d89233866ecf876c6ec53a2ac0207a12827b66936d8872f82f03c60a1a\": container with ID starting with 439d76d89233866ecf876c6ec53a2ac0207a12827b66936d8872f82f03c60a1a not found: ID does not exist" containerID="439d76d89233866ecf876c6ec53a2ac0207a12827b66936d8872f82f03c60a1a" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.635227 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"439d76d89233866ecf876c6ec53a2ac0207a12827b66936d8872f82f03c60a1a"} err="failed to get container status \"439d76d89233866ecf876c6ec53a2ac0207a12827b66936d8872f82f03c60a1a\": rpc error: code = NotFound desc = could not find container \"439d76d89233866ecf876c6ec53a2ac0207a12827b66936d8872f82f03c60a1a\": container with ID starting with 439d76d89233866ecf876c6ec53a2ac0207a12827b66936d8872f82f03c60a1a not found: ID does not exist" Nov 25 08:03:27 crc kubenswrapper[4482]: I1125 08:03:27.840327 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="044e7485-dd22-47f1-81a5-71f4eb04338f" path="/var/lib/kubelet/pods/044e7485-dd22-47f1-81a5-71f4eb04338f/volumes" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.088712 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sllz6"] Nov 25 08:04:12 crc kubenswrapper[4482]: E1125 08:04:12.093549 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044e7485-dd22-47f1-81a5-71f4eb04338f" containerName="extract-content" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.093574 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="044e7485-dd22-47f1-81a5-71f4eb04338f" containerName="extract-content" Nov 25 
08:04:12 crc kubenswrapper[4482]: E1125 08:04:12.093616 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044e7485-dd22-47f1-81a5-71f4eb04338f" containerName="extract-utilities" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.093623 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="044e7485-dd22-47f1-81a5-71f4eb04338f" containerName="extract-utilities" Nov 25 08:04:12 crc kubenswrapper[4482]: E1125 08:04:12.093651 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044e7485-dd22-47f1-81a5-71f4eb04338f" containerName="registry-server" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.093657 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="044e7485-dd22-47f1-81a5-71f4eb04338f" containerName="registry-server" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.094133 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="044e7485-dd22-47f1-81a5-71f4eb04338f" containerName="registry-server" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.096694 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.111099 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sllz6"] Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.256925 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678ecfdb-b39c-411b-8032-f335e3017043-catalog-content\") pod \"redhat-marketplace-sllz6\" (UID: \"678ecfdb-b39c-411b-8032-f335e3017043\") " pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.257012 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678ecfdb-b39c-411b-8032-f335e3017043-utilities\") pod \"redhat-marketplace-sllz6\" (UID: \"678ecfdb-b39c-411b-8032-f335e3017043\") " pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.257846 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfcjj\" (UniqueName: \"kubernetes.io/projected/678ecfdb-b39c-411b-8032-f335e3017043-kube-api-access-sfcjj\") pod \"redhat-marketplace-sllz6\" (UID: \"678ecfdb-b39c-411b-8032-f335e3017043\") " pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.359937 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678ecfdb-b39c-411b-8032-f335e3017043-utilities\") pod \"redhat-marketplace-sllz6\" (UID: \"678ecfdb-b39c-411b-8032-f335e3017043\") " pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.359979 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfcjj\" (UniqueName: \"kubernetes.io/projected/678ecfdb-b39c-411b-8032-f335e3017043-kube-api-access-sfcjj\") pod \"redhat-marketplace-sllz6\" (UID: \"678ecfdb-b39c-411b-8032-f335e3017043\") " pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.360101 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/678ecfdb-b39c-411b-8032-f335e3017043-catalog-content\") pod \"redhat-marketplace-sllz6\" (UID: \"678ecfdb-b39c-411b-8032-f335e3017043\") " pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.360486 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678ecfdb-b39c-411b-8032-f335e3017043-utilities\") pod \"redhat-marketplace-sllz6\" (UID: \"678ecfdb-b39c-411b-8032-f335e3017043\") " pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.360846 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678ecfdb-b39c-411b-8032-f335e3017043-catalog-content\") pod \"redhat-marketplace-sllz6\" (UID: \"678ecfdb-b39c-411b-8032-f335e3017043\") " pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.379915 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfcjj\" (UniqueName: \"kubernetes.io/projected/678ecfdb-b39c-411b-8032-f335e3017043-kube-api-access-sfcjj\") pod \"redhat-marketplace-sllz6\" (UID: \"678ecfdb-b39c-411b-8032-f335e3017043\") " pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.412425 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.830015 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sllz6"] Nov 25 08:04:12 crc kubenswrapper[4482]: I1125 08:04:12.909352 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sllz6" event={"ID":"678ecfdb-b39c-411b-8032-f335e3017043","Type":"ContainerStarted","Data":"e84d9fc06968c0979526fc00402fe0f022358decba766ffba5faa5e6d80e6b91"} Nov 25 08:04:13 crc kubenswrapper[4482]: I1125 08:04:13.918351 4482 generic.go:334] "Generic (PLEG): container finished" podID="678ecfdb-b39c-411b-8032-f335e3017043" containerID="157217c7417ab22ddc911d0cf782385b6bc3909fcafcc41a67218b3c2ab55fb1" exitCode=0 Nov 25 08:04:13 crc kubenswrapper[4482]: I1125 08:04:13.918408 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sllz6" event={"ID":"678ecfdb-b39c-411b-8032-f335e3017043","Type":"ContainerDied","Data":"157217c7417ab22ddc911d0cf782385b6bc3909fcafcc41a67218b3c2ab55fb1"} Nov 25 08:04:14 crc kubenswrapper[4482]: I1125 08:04:14.928939 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sllz6" event={"ID":"678ecfdb-b39c-411b-8032-f335e3017043","Type":"ContainerStarted","Data":"d79fbca2cbd41e96b7c588b847e1278f9f5e2f5dc8a9ae377ffe26c0f65929c5"} Nov 25 08:04:15 crc kubenswrapper[4482]: I1125 08:04:15.937136 4482 generic.go:334] "Generic (PLEG): container finished" podID="678ecfdb-b39c-411b-8032-f335e3017043" containerID="d79fbca2cbd41e96b7c588b847e1278f9f5e2f5dc8a9ae377ffe26c0f65929c5" exitCode=0 Nov 25 08:04:15 crc kubenswrapper[4482]: I1125 08:04:15.937194 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sllz6" event={"ID":"678ecfdb-b39c-411b-8032-f335e3017043","Type":"ContainerDied","Data":"d79fbca2cbd41e96b7c588b847e1278f9f5e2f5dc8a9ae377ffe26c0f65929c5"} 
Nov 25 08:04:16 crc kubenswrapper[4482]: I1125 08:04:16.945384 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sllz6" event={"ID":"678ecfdb-b39c-411b-8032-f335e3017043","Type":"ContainerStarted","Data":"62c28f8c2e716903990856cffd7c8f59b1a806197f78943e3f00c05af1312de4"} Nov 25 08:04:16 crc kubenswrapper[4482]: I1125 08:04:16.967177 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sllz6" podStartSLOduration=2.41333768 podStartE2EDuration="4.967153886s" podCreationTimestamp="2025-11-25 08:04:12 +0000 UTC" firstStartedPulling="2025-11-25 08:04:13.919822655 +0000 UTC m=+4628.408053905" lastFinishedPulling="2025-11-25 08:04:16.473638851 +0000 UTC m=+4630.961870111" observedRunningTime="2025-11-25 08:04:16.9583063 +0000 UTC m=+4631.446537560" watchObservedRunningTime="2025-11-25 08:04:16.967153886 +0000 UTC m=+4631.455385134" Nov 25 08:04:22 crc kubenswrapper[4482]: I1125 08:04:22.412927 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:22 crc kubenswrapper[4482]: I1125 08:04:22.413657 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:22 crc kubenswrapper[4482]: I1125 08:04:22.449722 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:23 crc kubenswrapper[4482]: I1125 08:04:23.020977 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:23 crc kubenswrapper[4482]: I1125 08:04:23.064477 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sllz6"] Nov 25 08:04:24 crc kubenswrapper[4482]: I1125 08:04:24.999702 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sllz6" podUID="678ecfdb-b39c-411b-8032-f335e3017043" containerName="registry-server" containerID="cri-o://62c28f8c2e716903990856cffd7c8f59b1a806197f78943e3f00c05af1312de4" gracePeriod=2 Nov 25 08:04:25 crc kubenswrapper[4482]: I1125 08:04:25.522370 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:25 crc kubenswrapper[4482]: I1125 08:04:25.602313 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678ecfdb-b39c-411b-8032-f335e3017043-catalog-content\") pod \"678ecfdb-b39c-411b-8032-f335e3017043\" (UID: \"678ecfdb-b39c-411b-8032-f335e3017043\") " Nov 25 08:04:25 crc kubenswrapper[4482]: I1125 08:04:25.602392 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfcjj\" (UniqueName: \"kubernetes.io/projected/678ecfdb-b39c-411b-8032-f335e3017043-kube-api-access-sfcjj\") pod \"678ecfdb-b39c-411b-8032-f335e3017043\" (UID: \"678ecfdb-b39c-411b-8032-f335e3017043\") " Nov 25 08:04:25 crc kubenswrapper[4482]: I1125 08:04:25.602453 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678ecfdb-b39c-411b-8032-f335e3017043-utilities\") pod \"678ecfdb-b39c-411b-8032-f335e3017043\" (UID: \"678ecfdb-b39c-411b-8032-f335e3017043\") " Nov 25 08:04:25 crc kubenswrapper[4482]: I1125 08:04:25.603094 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/678ecfdb-b39c-411b-8032-f335e3017043-utilities" (OuterVolumeSpecName: "utilities") pod "678ecfdb-b39c-411b-8032-f335e3017043" (UID: "678ecfdb-b39c-411b-8032-f335e3017043"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:04:25 crc kubenswrapper[4482]: I1125 08:04:25.603502 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678ecfdb-b39c-411b-8032-f335e3017043-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:04:25 crc kubenswrapper[4482]: I1125 08:04:25.607761 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678ecfdb-b39c-411b-8032-f335e3017043-kube-api-access-sfcjj" (OuterVolumeSpecName: "kube-api-access-sfcjj") pod "678ecfdb-b39c-411b-8032-f335e3017043" (UID: "678ecfdb-b39c-411b-8032-f335e3017043"). InnerVolumeSpecName "kube-api-access-sfcjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:04:25 crc kubenswrapper[4482]: I1125 08:04:25.615244 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/678ecfdb-b39c-411b-8032-f335e3017043-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "678ecfdb-b39c-411b-8032-f335e3017043" (UID: "678ecfdb-b39c-411b-8032-f335e3017043"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:04:25 crc kubenswrapper[4482]: I1125 08:04:25.705017 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678ecfdb-b39c-411b-8032-f335e3017043-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:04:25 crc kubenswrapper[4482]: I1125 08:04:25.705050 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfcjj\" (UniqueName: \"kubernetes.io/projected/678ecfdb-b39c-411b-8032-f335e3017043-kube-api-access-sfcjj\") on node \"crc\" DevicePath \"\"" Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.010595 4482 generic.go:334] "Generic (PLEG): container finished" podID="678ecfdb-b39c-411b-8032-f335e3017043" containerID="62c28f8c2e716903990856cffd7c8f59b1a806197f78943e3f00c05af1312de4" exitCode=0 Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.010641 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sllz6" event={"ID":"678ecfdb-b39c-411b-8032-f335e3017043","Type":"ContainerDied","Data":"62c28f8c2e716903990856cffd7c8f59b1a806197f78943e3f00c05af1312de4"} Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.010675 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sllz6" Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.010705 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sllz6" event={"ID":"678ecfdb-b39c-411b-8032-f335e3017043","Type":"ContainerDied","Data":"e84d9fc06968c0979526fc00402fe0f022358decba766ffba5faa5e6d80e6b91"} Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.010727 4482 scope.go:117] "RemoveContainer" containerID="62c28f8c2e716903990856cffd7c8f59b1a806197f78943e3f00c05af1312de4" Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.028506 4482 scope.go:117] "RemoveContainer" containerID="d79fbca2cbd41e96b7c588b847e1278f9f5e2f5dc8a9ae377ffe26c0f65929c5" Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.032900 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sllz6"] Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.047415 4482 scope.go:117] "RemoveContainer" containerID="157217c7417ab22ddc911d0cf782385b6bc3909fcafcc41a67218b3c2ab55fb1" Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.053325 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sllz6"] Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.088935 4482 scope.go:117] "RemoveContainer" containerID="62c28f8c2e716903990856cffd7c8f59b1a806197f78943e3f00c05af1312de4" Nov 25 08:04:26 crc kubenswrapper[4482]: E1125 08:04:26.090066 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62c28f8c2e716903990856cffd7c8f59b1a806197f78943e3f00c05af1312de4\": container with ID starting with 62c28f8c2e716903990856cffd7c8f59b1a806197f78943e3f00c05af1312de4 not found: ID does not exist" containerID="62c28f8c2e716903990856cffd7c8f59b1a806197f78943e3f00c05af1312de4" Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.090110 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62c28f8c2e716903990856cffd7c8f59b1a806197f78943e3f00c05af1312de4"} err="failed to get container status 
\"62c28f8c2e716903990856cffd7c8f59b1a806197f78943e3f00c05af1312de4\": rpc error: code = NotFound desc = could not find container \"62c28f8c2e716903990856cffd7c8f59b1a806197f78943e3f00c05af1312de4\": container with ID starting with 62c28f8c2e716903990856cffd7c8f59b1a806197f78943e3f00c05af1312de4 not found: ID does not exist" Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.090134 4482 scope.go:117] "RemoveContainer" containerID="d79fbca2cbd41e96b7c588b847e1278f9f5e2f5dc8a9ae377ffe26c0f65929c5" Nov 25 08:04:26 crc kubenswrapper[4482]: E1125 08:04:26.090484 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79fbca2cbd41e96b7c588b847e1278f9f5e2f5dc8a9ae377ffe26c0f65929c5\": container with ID starting with d79fbca2cbd41e96b7c588b847e1278f9f5e2f5dc8a9ae377ffe26c0f65929c5 not found: ID does not exist" containerID="d79fbca2cbd41e96b7c588b847e1278f9f5e2f5dc8a9ae377ffe26c0f65929c5" Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.090522 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79fbca2cbd41e96b7c588b847e1278f9f5e2f5dc8a9ae377ffe26c0f65929c5"} err="failed to get container status \"d79fbca2cbd41e96b7c588b847e1278f9f5e2f5dc8a9ae377ffe26c0f65929c5\": rpc error: code = NotFound desc = could not find container \"d79fbca2cbd41e96b7c588b847e1278f9f5e2f5dc8a9ae377ffe26c0f65929c5\": container with ID starting with d79fbca2cbd41e96b7c588b847e1278f9f5e2f5dc8a9ae377ffe26c0f65929c5 not found: ID does not exist" Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.090551 4482 scope.go:117] "RemoveContainer" containerID="157217c7417ab22ddc911d0cf782385b6bc3909fcafcc41a67218b3c2ab55fb1" Nov 25 08:04:26 crc kubenswrapper[4482]: E1125 08:04:26.090836 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157217c7417ab22ddc911d0cf782385b6bc3909fcafcc41a67218b3c2ab55fb1\": container with ID starting with 157217c7417ab22ddc911d0cf782385b6bc3909fcafcc41a67218b3c2ab55fb1 not found: ID does not exist" containerID="157217c7417ab22ddc911d0cf782385b6bc3909fcafcc41a67218b3c2ab55fb1" Nov 25 08:04:26 crc kubenswrapper[4482]: I1125 08:04:26.090860 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157217c7417ab22ddc911d0cf782385b6bc3909fcafcc41a67218b3c2ab55fb1"} err="failed to get container status \"157217c7417ab22ddc911d0cf782385b6bc3909fcafcc41a67218b3c2ab55fb1\": rpc error: code = NotFound desc = could not find container \"157217c7417ab22ddc911d0cf782385b6bc3909fcafcc41a67218b3c2ab55fb1\": container with ID starting with 157217c7417ab22ddc911d0cf782385b6bc3909fcafcc41a67218b3c2ab55fb1 not found: ID does not exist" Nov 25 08:04:27 crc kubenswrapper[4482]: I1125 08:04:27.839391 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678ecfdb-b39c-411b-8032-f335e3017043" path="/var/lib/kubelet/pods/678ecfdb-b39c-411b-8032-f335e3017043/volumes" Nov 25 08:05:39 crc kubenswrapper[4482]: I1125 08:05:39.118261 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:05:39 crc kubenswrapper[4482]: I1125 08:05:39.118651 4482 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:06:09 crc kubenswrapper[4482]: I1125 08:06:09.117989 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:06:09 crc kubenswrapper[4482]: I1125 08:06:09.120275 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:06:39 crc kubenswrapper[4482]: I1125 08:06:39.117933 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:06:39 crc kubenswrapper[4482]: I1125 08:06:39.118699 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:06:39 crc kubenswrapper[4482]: I1125 08:06:39.118762 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 08:06:39 crc kubenswrapper[4482]: I1125 08:06:39.119566 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b771395cc7f74353b9f17d150b95c27d8459871c82cb05d24be7ce86de7b60a2"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:06:39 crc kubenswrapper[4482]: I1125 08:06:39.119628 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://b771395cc7f74353b9f17d150b95c27d8459871c82cb05d24be7ce86de7b60a2" gracePeriod=600 Nov 25 08:06:40 crc kubenswrapper[4482]: I1125 08:06:40.029306 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="b771395cc7f74353b9f17d150b95c27d8459871c82cb05d24be7ce86de7b60a2" exitCode=0 Nov 25 08:06:40 crc kubenswrapper[4482]: I1125 08:06:40.029341 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"b771395cc7f74353b9f17d150b95c27d8459871c82cb05d24be7ce86de7b60a2"} Nov 25 08:06:40 crc kubenswrapper[4482]: I1125 08:06:40.030279 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab"} Nov 25 08:06:40 crc kubenswrapper[4482]: I1125 08:06:40.030320 4482 scope.go:117] "RemoveContainer" containerID="f757b32322334ce09e6169c7bec724c2911e23eca44e1d68c785637e48535f78" Nov 25 08:07:32 crc kubenswrapper[4482]: I1125 08:07:32.493862 4482 generic.go:334] "Generic (PLEG): container finished" podID="da456db2-5bd8-40d0-a229-036a6f9b95f7" containerID="2138fc733930429e75b5b9925fbb6449f1437945455fc818dc278cc3afcb0d01" exitCode=0 Nov 25 08:07:32 crc kubenswrapper[4482]: I1125 08:07:32.493986 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"da456db2-5bd8-40d0-a229-036a6f9b95f7","Type":"ContainerDied","Data":"2138fc733930429e75b5b9925fbb6449f1437945455fc818dc278cc3afcb0d01"} Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.091415 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.177826 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Nov 25 08:07:34 crc kubenswrapper[4482]: E1125 08:07:34.178287 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678ecfdb-b39c-411b-8032-f335e3017043" containerName="extract-utilities" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.178308 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="678ecfdb-b39c-411b-8032-f335e3017043" containerName="extract-utilities" Nov 25 08:07:34 crc kubenswrapper[4482]: E1125 08:07:34.178334 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678ecfdb-b39c-411b-8032-f335e3017043" containerName="extract-content" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.178340 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="678ecfdb-b39c-411b-8032-f335e3017043" containerName="extract-content" Nov 25 08:07:34 crc kubenswrapper[4482]: E1125 08:07:34.178370 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da456db2-5bd8-40d0-a229-036a6f9b95f7" containerName="tempest-tests-tempest-tests-runner" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.178375 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="da456db2-5bd8-40d0-a229-036a6f9b95f7" containerName="tempest-tests-tempest-tests-runner" Nov 25 08:07:34 crc kubenswrapper[4482]: E1125 08:07:34.178387 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678ecfdb-b39c-411b-8032-f335e3017043" containerName="registry-server" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.178394 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="678ecfdb-b39c-411b-8032-f335e3017043" containerName="registry-server" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.178594 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="da456db2-5bd8-40d0-a229-036a6f9b95f7" containerName="tempest-tests-tempest-tests-runner" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.178624 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="678ecfdb-b39c-411b-8032-f335e3017043" containerName="registry-server" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.179297 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.180869 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.184010 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.186743 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.200940 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-openstack-config-secret\") pod \"da456db2-5bd8-40d0-a229-036a6f9b95f7\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.201356 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/da456db2-5bd8-40d0-a229-036a6f9b95f7-openstack-config\") pod \"da456db2-5bd8-40d0-a229-036a6f9b95f7\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.201417 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/da456db2-5bd8-40d0-a229-036a6f9b95f7-test-operator-ephemeral-workdir\") pod \"da456db2-5bd8-40d0-a229-036a6f9b95f7\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.201491 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/da456db2-5bd8-40d0-a229-036a6f9b95f7-test-operator-ephemeral-temporary\") pod \"da456db2-5bd8-40d0-a229-036a6f9b95f7\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.201577 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-ssh-key\") pod \"da456db2-5bd8-40d0-a229-036a6f9b95f7\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.201604 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da456db2-5bd8-40d0-a229-036a6f9b95f7-config-data\") pod \"da456db2-5bd8-40d0-a229-036a6f9b95f7\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.201638 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87zgz\" (UniqueName: \"kubernetes.io/projected/da456db2-5bd8-40d0-a229-036a6f9b95f7-kube-api-access-87zgz\") pod \"da456db2-5bd8-40d0-a229-036a6f9b95f7\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.201711 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"da456db2-5bd8-40d0-a229-036a6f9b95f7\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " Nov 25 08:07:34 crc 
kubenswrapper[4482]: I1125 08:07:34.201739 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-ca-certs\") pod \"da456db2-5bd8-40d0-a229-036a6f9b95f7\" (UID: \"da456db2-5bd8-40d0-a229-036a6f9b95f7\") " Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.202270 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.202334 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.202455 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.203050 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da456db2-5bd8-40d0-a229-036a6f9b95f7-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "da456db2-5bd8-40d0-a229-036a6f9b95f7" (UID: "da456db2-5bd8-40d0-a229-036a6f9b95f7"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.204749 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da456db2-5bd8-40d0-a229-036a6f9b95f7-config-data" (OuterVolumeSpecName: "config-data") pod "da456db2-5bd8-40d0-a229-036a6f9b95f7" (UID: "da456db2-5bd8-40d0-a229-036a6f9b95f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.211284 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da456db2-5bd8-40d0-a229-036a6f9b95f7-kube-api-access-87zgz" (OuterVolumeSpecName: "kube-api-access-87zgz") pod "da456db2-5bd8-40d0-a229-036a6f9b95f7" (UID: "da456db2-5bd8-40d0-a229-036a6f9b95f7"). InnerVolumeSpecName "kube-api-access-87zgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.215137 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "da456db2-5bd8-40d0-a229-036a6f9b95f7" (UID: "da456db2-5bd8-40d0-a229-036a6f9b95f7"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.235732 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "da456db2-5bd8-40d0-a229-036a6f9b95f7" (UID: "da456db2-5bd8-40d0-a229-036a6f9b95f7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.237281 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "da456db2-5bd8-40d0-a229-036a6f9b95f7" (UID: "da456db2-5bd8-40d0-a229-036a6f9b95f7"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.248540 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "da456db2-5bd8-40d0-a229-036a6f9b95f7" (UID: "da456db2-5bd8-40d0-a229-036a6f9b95f7"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.248869 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da456db2-5bd8-40d0-a229-036a6f9b95f7-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "da456db2-5bd8-40d0-a229-036a6f9b95f7" (UID: "da456db2-5bd8-40d0-a229-036a6f9b95f7"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.256780 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da456db2-5bd8-40d0-a229-036a6f9b95f7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "da456db2-5bd8-40d0-a229-036a6f9b95f7" (UID: "da456db2-5bd8-40d0-a229-036a6f9b95f7"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.304461 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k67l4\" (UniqueName: \"kubernetes.io/projected/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-kube-api-access-k67l4\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.304539 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.304613 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.304663 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.304681 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.304715 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.304766 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.304802 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " 
pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.304836 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.304937 4482 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.305008 4482 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/da456db2-5bd8-40d0-a229-036a6f9b95f7-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.305132 4482 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/da456db2-5bd8-40d0-a229-036a6f9b95f7-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.305833 4482 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/da456db2-5bd8-40d0-a229-036a6f9b95f7-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.305931 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.305948 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.305992 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da456db2-5bd8-40d0-a229-036a6f9b95f7-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.306013 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87zgz\" (UniqueName: \"kubernetes.io/projected/da456db2-5bd8-40d0-a229-036a6f9b95f7-kube-api-access-87zgz\") on node \"crc\" DevicePath \"\"" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.306027 4482 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/da456db2-5bd8-40d0-a229-036a6f9b95f7-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.306079 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " 
pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.314601 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.337679 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.407997 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k67l4\" (UniqueName: \"kubernetes.io/projected/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-kube-api-access-k67l4\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.408084 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.408155 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.408212 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.408264 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.408965 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 
crc kubenswrapper[4482]: I1125 08:07:34.409031 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.411528 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.411588 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.426333 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k67l4\" (UniqueName: \"kubernetes.io/projected/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-kube-api-access-k67l4\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.493131 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.524490 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"da456db2-5bd8-40d0-a229-036a6f9b95f7","Type":"ContainerDied","Data":"7c6e54c13b8c50c03c7d9186500c1891d89f7d0972bc0b96de272880977a6b3c"} Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.524543 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c6e54c13b8c50c03c7d9186500c1891d89f7d0972bc0b96de272880977a6b3c" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.524562 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Nov 25 08:07:34 crc kubenswrapper[4482]: I1125 08:07:34.994521 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Nov 25 08:07:35 crc kubenswrapper[4482]: I1125 08:07:35.534503 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"2d7f601b-273a-4af7-8c8f-a6c60ebf212b","Type":"ContainerStarted","Data":"aff0e6bcad370e6346ed512679533c1190ed0e4616768b3963c70b5568abf1cb"} Nov 25 08:07:37 crc kubenswrapper[4482]: I1125 08:07:37.557904 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"2d7f601b-273a-4af7-8c8f-a6c60ebf212b","Type":"ContainerStarted","Data":"82592a3065f5995230540d32b2108ef2ebc524bb729162a8af220009996b856a"} Nov 25 08:07:37 crc kubenswrapper[4482]: I1125 08:07:37.582226 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" podStartSLOduration=3.582210167 podStartE2EDuration="3.582210167s" podCreationTimestamp="2025-11-25 08:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:07:37.57301181 +0000 UTC m=+4832.061243069" watchObservedRunningTime="2025-11-25 08:07:37.582210167 +0000 UTC m=+4832.070441425" Nov 25 08:08:11 crc kubenswrapper[4482]: I1125 08:08:11.680188 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4cdkd"] Nov 25 08:08:11 crc kubenswrapper[4482]: I1125 08:08:11.683667 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:11 crc kubenswrapper[4482]: I1125 08:08:11.692231 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4225aa99-466b-4b5f-9138-ff1ae17a5b15-utilities\") pod \"certified-operators-4cdkd\" (UID: \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\") " pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:11 crc kubenswrapper[4482]: I1125 08:08:11.692330 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4225aa99-466b-4b5f-9138-ff1ae17a5b15-catalog-content\") pod \"certified-operators-4cdkd\" (UID: \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\") " pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:11 crc kubenswrapper[4482]: I1125 08:08:11.692457 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6v8\" (UniqueName: \"kubernetes.io/projected/4225aa99-466b-4b5f-9138-ff1ae17a5b15-kube-api-access-fm6v8\") pod \"certified-operators-4cdkd\" (UID: \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\") " pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:11 crc kubenswrapper[4482]: I1125 08:08:11.703852 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4cdkd"] Nov 25 08:08:11 crc kubenswrapper[4482]: I1125 08:08:11.796041 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm6v8\" (UniqueName: \"kubernetes.io/projected/4225aa99-466b-4b5f-9138-ff1ae17a5b15-kube-api-access-fm6v8\") pod \"certified-operators-4cdkd\" (UID: \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\") " pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:11 crc kubenswrapper[4482]: I1125 08:08:11.796229 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4225aa99-466b-4b5f-9138-ff1ae17a5b15-utilities\") pod \"certified-operators-4cdkd\" (UID: \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\") " pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:11 crc kubenswrapper[4482]: I1125 08:08:11.796357 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4225aa99-466b-4b5f-9138-ff1ae17a5b15-catalog-content\") pod \"certified-operators-4cdkd\" (UID: \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\") " pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:11 crc kubenswrapper[4482]: I1125 08:08:11.796978 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4225aa99-466b-4b5f-9138-ff1ae17a5b15-catalog-content\") pod \"certified-operators-4cdkd\" (UID: \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\") " pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:11 crc kubenswrapper[4482]: I1125 08:08:11.797562 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4225aa99-466b-4b5f-9138-ff1ae17a5b15-utilities\") pod \"certified-operators-4cdkd\" (UID: \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\") " pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:11 crc kubenswrapper[4482]: I1125 08:08:11.907513 4482 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fm6v8\" (UniqueName: \"kubernetes.io/projected/4225aa99-466b-4b5f-9138-ff1ae17a5b15-kube-api-access-fm6v8\") pod \"certified-operators-4cdkd\" (UID: \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\") " pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:12 crc kubenswrapper[4482]: I1125 08:08:12.012836 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:12 crc kubenswrapper[4482]: I1125 08:08:12.481002 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4cdkd"] Nov 25 08:08:12 crc kubenswrapper[4482]: I1125 08:08:12.898861 4482 generic.go:334] "Generic (PLEG): container finished" podID="4225aa99-466b-4b5f-9138-ff1ae17a5b15" containerID="232b261fffae7b7d78b0347c5263d5cfed94460c9592117c8df93d252e467d0d" exitCode=0 Nov 25 08:08:12 crc kubenswrapper[4482]: I1125 08:08:12.898959 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4cdkd" event={"ID":"4225aa99-466b-4b5f-9138-ff1ae17a5b15","Type":"ContainerDied","Data":"232b261fffae7b7d78b0347c5263d5cfed94460c9592117c8df93d252e467d0d"} Nov 25 08:08:12 crc kubenswrapper[4482]: I1125 08:08:12.901500 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4cdkd" event={"ID":"4225aa99-466b-4b5f-9138-ff1ae17a5b15","Type":"ContainerStarted","Data":"8f8b6a36f1c676f0870da7ba3dc712a8d3bb3e106aebd45e7ebd371f793d7d49"} Nov 25 08:08:12 crc kubenswrapper[4482]: I1125 08:08:12.902215 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:08:13 crc kubenswrapper[4482]: I1125 08:08:13.911544 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4cdkd" event={"ID":"4225aa99-466b-4b5f-9138-ff1ae17a5b15","Type":"ContainerStarted","Data":"77d66846e76ebc61324eaf149bf0460c66cc276f135e169e9ac7b7a24d341d54"} Nov 25 08:08:14 crc kubenswrapper[4482]: I1125 08:08:14.925014 4482 generic.go:334] "Generic (PLEG): container finished" podID="4225aa99-466b-4b5f-9138-ff1ae17a5b15" containerID="77d66846e76ebc61324eaf149bf0460c66cc276f135e169e9ac7b7a24d341d54" exitCode=0 Nov 25 08:08:14 crc kubenswrapper[4482]: I1125 08:08:14.925071 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4cdkd" event={"ID":"4225aa99-466b-4b5f-9138-ff1ae17a5b15","Type":"ContainerDied","Data":"77d66846e76ebc61324eaf149bf0460c66cc276f135e169e9ac7b7a24d341d54"} Nov 25 08:08:15 crc kubenswrapper[4482]: I1125 08:08:15.935477 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4cdkd" event={"ID":"4225aa99-466b-4b5f-9138-ff1ae17a5b15","Type":"ContainerStarted","Data":"c0648dd792038668ddd86fabe34a7c070515d27697cddda47dd6ff218b3acee4"} Nov 25 08:08:15 crc kubenswrapper[4482]: I1125 08:08:15.956081 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4cdkd" podStartSLOduration=2.4751364909999998 podStartE2EDuration="4.956053619s" podCreationTimestamp="2025-11-25 08:08:11 +0000 UTC" firstStartedPulling="2025-11-25 08:08:12.901382644 +0000 UTC m=+4867.389613894" lastFinishedPulling="2025-11-25 08:08:15.382299763 +0000 UTC m=+4869.870531022" observedRunningTime="2025-11-25 08:08:15.954692954 +0000 UTC m=+4870.442924203" watchObservedRunningTime="2025-11-25 
08:08:15.956053619 +0000 UTC m=+4870.444284877" Nov 25 08:08:21 crc kubenswrapper[4482]: E1125 08:08:21.802019 4482 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.26.133:43352->192.168.26.133:42749: read tcp 192.168.26.133:43352->192.168.26.133:42749: read: connection reset by peer Nov 25 08:08:22 crc kubenswrapper[4482]: I1125 08:08:22.013355 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:22 crc kubenswrapper[4482]: I1125 08:08:22.013606 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:22 crc kubenswrapper[4482]: I1125 08:08:22.050363 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:22 crc kubenswrapper[4482]: I1125 08:08:22.889307 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b7cdd7c85-7hng5"] Nov 25 08:08:22 crc kubenswrapper[4482]: I1125 08:08:22.891655 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:22 crc kubenswrapper[4482]: I1125 08:08:22.978724 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b7cdd7c85-7hng5"] Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.045397 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5zrgg"] Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.047283 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.050915 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-combined-ca-bundle\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.051070 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-httpd-config\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.051154 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-ovndb-tls-certs\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.051221 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-public-tls-certs\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.051293 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpppq\" (UniqueName: 
\"kubernetes.io/projected/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-kube-api-access-qpppq\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.051318 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-config\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.051392 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-internal-tls-certs\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.054321 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5zrgg"] Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.091818 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.153353 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-public-tls-certs\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.153460 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpppq\" (UniqueName: \"kubernetes.io/projected/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-kube-api-access-qpppq\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.153486 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-config\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.153519 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633da329-9386-4344-a6a9-5538401ca23a-catalog-content\") pod \"community-operators-5zrgg\" (UID: \"633da329-9386-4344-a6a9-5538401ca23a\") " pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.153590 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-internal-tls-certs\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.153673 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-combined-ca-bundle\") pod 
\"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.153761 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633da329-9386-4344-a6a9-5538401ca23a-utilities\") pod \"community-operators-5zrgg\" (UID: \"633da329-9386-4344-a6a9-5538401ca23a\") " pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.153798 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-httpd-config\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.153863 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-ovndb-tls-certs\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.153884 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxx8l\" (UniqueName: \"kubernetes.io/projected/633da329-9386-4344-a6a9-5538401ca23a-kube-api-access-lxx8l\") pod \"community-operators-5zrgg\" (UID: \"633da329-9386-4344-a6a9-5538401ca23a\") " pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.161456 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-internal-tls-certs\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.167665 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-combined-ca-bundle\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.167681 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-httpd-config\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.173877 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-config\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.174378 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-public-tls-certs\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 
08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.184656 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-ovndb-tls-certs\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.201294 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpppq\" (UniqueName: \"kubernetes.io/projected/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-kube-api-access-qpppq\") pod \"neutron-b7cdd7c85-7hng5\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.235546 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.256875 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxx8l\" (UniqueName: \"kubernetes.io/projected/633da329-9386-4344-a6a9-5538401ca23a-kube-api-access-lxx8l\") pod \"community-operators-5zrgg\" (UID: \"633da329-9386-4344-a6a9-5538401ca23a\") " pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.257158 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633da329-9386-4344-a6a9-5538401ca23a-catalog-content\") pod \"community-operators-5zrgg\" (UID: \"633da329-9386-4344-a6a9-5538401ca23a\") " pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.257490 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633da329-9386-4344-a6a9-5538401ca23a-utilities\") pod \"community-operators-5zrgg\" (UID: \"633da329-9386-4344-a6a9-5538401ca23a\") " pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.257988 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633da329-9386-4344-a6a9-5538401ca23a-utilities\") pod \"community-operators-5zrgg\" (UID: \"633da329-9386-4344-a6a9-5538401ca23a\") " pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.258645 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633da329-9386-4344-a6a9-5538401ca23a-catalog-content\") pod \"community-operators-5zrgg\" (UID: \"633da329-9386-4344-a6a9-5538401ca23a\") " pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.275678 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxx8l\" (UniqueName: \"kubernetes.io/projected/633da329-9386-4344-a6a9-5538401ca23a-kube-api-access-lxx8l\") pod \"community-operators-5zrgg\" (UID: \"633da329-9386-4344-a6a9-5538401ca23a\") " pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.446380 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.860789 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b7cdd7c85-7hng5"] Nov 25 08:08:23 crc kubenswrapper[4482]: I1125 08:08:23.956457 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5zrgg"] Nov 25 08:08:24 crc kubenswrapper[4482]: I1125 08:08:24.018032 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zrgg" event={"ID":"633da329-9386-4344-a6a9-5538401ca23a","Type":"ContainerStarted","Data":"8e1496d218be77a86ca15b1209d1b1a4484bb445a0ba524e176a40f81d87e9b7"} Nov 25 08:08:24 crc kubenswrapper[4482]: I1125 08:08:24.021849 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7cdd7c85-7hng5" event={"ID":"a4ae615c-dd7c-4ffe-968d-369d0b26c25b","Type":"ContainerStarted","Data":"45f26c1c640192f8a01076c765be8918b5cc3aac615303140a2a593f67c13bd7"} Nov 25 08:08:25 crc kubenswrapper[4482]: I1125 08:08:25.032836 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7cdd7c85-7hng5" event={"ID":"a4ae615c-dd7c-4ffe-968d-369d0b26c25b","Type":"ContainerStarted","Data":"b324c6af9c636d86140c0c634f9be6bad4477f7497f7bd026efa1e667c704d70"} Nov 25 08:08:25 crc kubenswrapper[4482]: I1125 08:08:25.033631 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7cdd7c85-7hng5" event={"ID":"a4ae615c-dd7c-4ffe-968d-369d0b26c25b","Type":"ContainerStarted","Data":"2f8fd4653c37e5bd6ca2f5d9641fd6668159c731193eefe862e2802305759e0c"} Nov 25 08:08:25 crc kubenswrapper[4482]: I1125 08:08:25.033705 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:25 crc kubenswrapper[4482]: I1125 08:08:25.034632 4482 generic.go:334] "Generic (PLEG): container finished" podID="633da329-9386-4344-a6a9-5538401ca23a" containerID="e60745c7a25a01910deab3570c3eb4df917b17071f55a9938b04294c1650e5a8" exitCode=0 Nov 25 08:08:25 crc kubenswrapper[4482]: I1125 08:08:25.034682 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zrgg" event={"ID":"633da329-9386-4344-a6a9-5538401ca23a","Type":"ContainerDied","Data":"e60745c7a25a01910deab3570c3eb4df917b17071f55a9938b04294c1650e5a8"} Nov 25 08:08:25 crc kubenswrapper[4482]: I1125 08:08:25.062630 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b7cdd7c85-7hng5" podStartSLOduration=3.062608176 podStartE2EDuration="3.062608176s" podCreationTimestamp="2025-11-25 08:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:08:25.055892318 +0000 UTC m=+4879.544123577" watchObservedRunningTime="2025-11-25 08:08:25.062608176 +0000 UTC m=+4879.550839435" Nov 25 08:08:25 crc kubenswrapper[4482]: I1125 08:08:25.484612 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4cdkd"] Nov 25 08:08:25 crc kubenswrapper[4482]: I1125 08:08:25.485132 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4cdkd" podUID="4225aa99-466b-4b5f-9138-ff1ae17a5b15" containerName="registry-server" containerID="cri-o://c0648dd792038668ddd86fabe34a7c070515d27697cddda47dd6ff218b3acee4" gracePeriod=2 Nov 25 08:08:25 crc 
kubenswrapper[4482]: I1125 08:08:25.968064 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.047207 4482 generic.go:334] "Generic (PLEG): container finished" podID="4225aa99-466b-4b5f-9138-ff1ae17a5b15" containerID="c0648dd792038668ddd86fabe34a7c070515d27697cddda47dd6ff218b3acee4" exitCode=0 Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.047287 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4cdkd" event={"ID":"4225aa99-466b-4b5f-9138-ff1ae17a5b15","Type":"ContainerDied","Data":"c0648dd792038668ddd86fabe34a7c070515d27697cddda47dd6ff218b3acee4"} Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.047324 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4cdkd" event={"ID":"4225aa99-466b-4b5f-9138-ff1ae17a5b15","Type":"ContainerDied","Data":"8f8b6a36f1c676f0870da7ba3dc712a8d3bb3e106aebd45e7ebd371f793d7d49"} Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.047367 4482 scope.go:117] "RemoveContainer" containerID="c0648dd792038668ddd86fabe34a7c070515d27697cddda47dd6ff218b3acee4" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.047631 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4cdkd" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.067999 4482 scope.go:117] "RemoveContainer" containerID="77d66846e76ebc61324eaf149bf0460c66cc276f135e169e9ac7b7a24d341d54" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.088939 4482 scope.go:117] "RemoveContainer" containerID="232b261fffae7b7d78b0347c5263d5cfed94460c9592117c8df93d252e467d0d" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.128873 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm6v8\" (UniqueName: \"kubernetes.io/projected/4225aa99-466b-4b5f-9138-ff1ae17a5b15-kube-api-access-fm6v8\") pod \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\" (UID: \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\") " Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.129044 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4225aa99-466b-4b5f-9138-ff1ae17a5b15-catalog-content\") pod \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\" (UID: \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\") " Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.129276 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4225aa99-466b-4b5f-9138-ff1ae17a5b15-utilities\") pod \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\" (UID: \"4225aa99-466b-4b5f-9138-ff1ae17a5b15\") " Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.130710 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4225aa99-466b-4b5f-9138-ff1ae17a5b15-utilities" (OuterVolumeSpecName: "utilities") pod "4225aa99-466b-4b5f-9138-ff1ae17a5b15" (UID: "4225aa99-466b-4b5f-9138-ff1ae17a5b15"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.133924 4482 scope.go:117] "RemoveContainer" containerID="c0648dd792038668ddd86fabe34a7c070515d27697cddda47dd6ff218b3acee4" Nov 25 08:08:26 crc kubenswrapper[4482]: E1125 08:08:26.134842 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0648dd792038668ddd86fabe34a7c070515d27697cddda47dd6ff218b3acee4\": container with ID starting with c0648dd792038668ddd86fabe34a7c070515d27697cddda47dd6ff218b3acee4 not found: ID does not exist" containerID="c0648dd792038668ddd86fabe34a7c070515d27697cddda47dd6ff218b3acee4" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.134870 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0648dd792038668ddd86fabe34a7c070515d27697cddda47dd6ff218b3acee4"} err="failed to get container status \"c0648dd792038668ddd86fabe34a7c070515d27697cddda47dd6ff218b3acee4\": rpc error: code = NotFound desc = could not find container \"c0648dd792038668ddd86fabe34a7c070515d27697cddda47dd6ff218b3acee4\": container with ID starting with c0648dd792038668ddd86fabe34a7c070515d27697cddda47dd6ff218b3acee4 not found: ID does not exist" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.134893 4482 scope.go:117] "RemoveContainer" containerID="77d66846e76ebc61324eaf149bf0460c66cc276f135e169e9ac7b7a24d341d54" Nov 25 08:08:26 crc kubenswrapper[4482]: E1125 08:08:26.135284 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77d66846e76ebc61324eaf149bf0460c66cc276f135e169e9ac7b7a24d341d54\": container with ID starting with 77d66846e76ebc61324eaf149bf0460c66cc276f135e169e9ac7b7a24d341d54 not found: ID does not exist" containerID="77d66846e76ebc61324eaf149bf0460c66cc276f135e169e9ac7b7a24d341d54" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.135307 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d66846e76ebc61324eaf149bf0460c66cc276f135e169e9ac7b7a24d341d54"} err="failed to get container status \"77d66846e76ebc61324eaf149bf0460c66cc276f135e169e9ac7b7a24d341d54\": rpc error: code = NotFound desc = could not find container \"77d66846e76ebc61324eaf149bf0460c66cc276f135e169e9ac7b7a24d341d54\": container with ID starting with 77d66846e76ebc61324eaf149bf0460c66cc276f135e169e9ac7b7a24d341d54 not found: ID does not exist" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.135323 4482 scope.go:117] "RemoveContainer" containerID="232b261fffae7b7d78b0347c5263d5cfed94460c9592117c8df93d252e467d0d" Nov 25 08:08:26 crc kubenswrapper[4482]: E1125 08:08:26.135920 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"232b261fffae7b7d78b0347c5263d5cfed94460c9592117c8df93d252e467d0d\": container with ID starting with 232b261fffae7b7d78b0347c5263d5cfed94460c9592117c8df93d252e467d0d not found: ID does not exist" containerID="232b261fffae7b7d78b0347c5263d5cfed94460c9592117c8df93d252e467d0d" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.135944 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232b261fffae7b7d78b0347c5263d5cfed94460c9592117c8df93d252e467d0d"} err="failed to get container status \"232b261fffae7b7d78b0347c5263d5cfed94460c9592117c8df93d252e467d0d\": rpc error: code = NotFound desc = could not 
find container \"232b261fffae7b7d78b0347c5263d5cfed94460c9592117c8df93d252e467d0d\": container with ID starting with 232b261fffae7b7d78b0347c5263d5cfed94460c9592117c8df93d252e467d0d not found: ID does not exist" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.136294 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4225aa99-466b-4b5f-9138-ff1ae17a5b15-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.141095 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4225aa99-466b-4b5f-9138-ff1ae17a5b15-kube-api-access-fm6v8" (OuterVolumeSpecName: "kube-api-access-fm6v8") pod "4225aa99-466b-4b5f-9138-ff1ae17a5b15" (UID: "4225aa99-466b-4b5f-9138-ff1ae17a5b15"). InnerVolumeSpecName "kube-api-access-fm6v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.170099 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4225aa99-466b-4b5f-9138-ff1ae17a5b15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4225aa99-466b-4b5f-9138-ff1ae17a5b15" (UID: "4225aa99-466b-4b5f-9138-ff1ae17a5b15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.238682 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm6v8\" (UniqueName: \"kubernetes.io/projected/4225aa99-466b-4b5f-9138-ff1ae17a5b15-kube-api-access-fm6v8\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.238722 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4225aa99-466b-4b5f-9138-ff1ae17a5b15-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.391751 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4cdkd"] Nov 25 08:08:26 crc kubenswrapper[4482]: I1125 08:08:26.401154 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4cdkd"] Nov 25 08:08:27 crc kubenswrapper[4482]: I1125 08:08:27.066404 4482 generic.go:334] "Generic (PLEG): container finished" podID="633da329-9386-4344-a6a9-5538401ca23a" containerID="5dfd27e0e63aa1088065c51288a80bff6fbb80dfac59fd1bd417b9816c834ea3" exitCode=0 Nov 25 08:08:27 crc kubenswrapper[4482]: I1125 08:08:27.067324 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zrgg" event={"ID":"633da329-9386-4344-a6a9-5538401ca23a","Type":"ContainerDied","Data":"5dfd27e0e63aa1088065c51288a80bff6fbb80dfac59fd1bd417b9816c834ea3"} Nov 25 08:08:27 crc kubenswrapper[4482]: I1125 08:08:27.840575 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4225aa99-466b-4b5f-9138-ff1ae17a5b15" path="/var/lib/kubelet/pods/4225aa99-466b-4b5f-9138-ff1ae17a5b15/volumes" Nov 25 08:08:28 crc kubenswrapper[4482]: I1125 08:08:28.077963 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zrgg" event={"ID":"633da329-9386-4344-a6a9-5538401ca23a","Type":"ContainerStarted","Data":"a2698be4b8b36b21197d0eeefa34ea27e30ad4802e8c6e9147092d8c5ddaa7e9"} Nov 25 08:08:28 crc kubenswrapper[4482]: I1125 08:08:28.103654 4482 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/community-operators-5zrgg" podStartSLOduration=3.530920468 podStartE2EDuration="6.103632038s" podCreationTimestamp="2025-11-25 08:08:22 +0000 UTC" firstStartedPulling="2025-11-25 08:08:25.037179609 +0000 UTC m=+4879.525410868" lastFinishedPulling="2025-11-25 08:08:27.609891178 +0000 UTC m=+4882.098122438" observedRunningTime="2025-11-25 08:08:28.10149492 +0000 UTC m=+4882.589726168" watchObservedRunningTime="2025-11-25 08:08:28.103632038 +0000 UTC m=+4882.591863297" Nov 25 08:08:33 crc kubenswrapper[4482]: I1125 08:08:33.447019 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:33 crc kubenswrapper[4482]: I1125 08:08:33.447436 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:33 crc kubenswrapper[4482]: I1125 08:08:33.495149 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:34 crc kubenswrapper[4482]: I1125 08:08:34.180772 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:36 crc kubenswrapper[4482]: I1125 08:08:36.093878 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5zrgg"] Nov 25 08:08:36 crc kubenswrapper[4482]: I1125 08:08:36.157859 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5zrgg" podUID="633da329-9386-4344-a6a9-5538401ca23a" containerName="registry-server" containerID="cri-o://a2698be4b8b36b21197d0eeefa34ea27e30ad4802e8c6e9147092d8c5ddaa7e9" gracePeriod=2 Nov 25 08:08:36 crc kubenswrapper[4482]: I1125 08:08:36.564054 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:36 crc kubenswrapper[4482]: I1125 08:08:36.755217 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633da329-9386-4344-a6a9-5538401ca23a-catalog-content\") pod \"633da329-9386-4344-a6a9-5538401ca23a\" (UID: \"633da329-9386-4344-a6a9-5538401ca23a\") " Nov 25 08:08:36 crc kubenswrapper[4482]: I1125 08:08:36.755272 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxx8l\" (UniqueName: \"kubernetes.io/projected/633da329-9386-4344-a6a9-5538401ca23a-kube-api-access-lxx8l\") pod \"633da329-9386-4344-a6a9-5538401ca23a\" (UID: \"633da329-9386-4344-a6a9-5538401ca23a\") " Nov 25 08:08:36 crc kubenswrapper[4482]: I1125 08:08:36.755465 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633da329-9386-4344-a6a9-5538401ca23a-utilities\") pod \"633da329-9386-4344-a6a9-5538401ca23a\" (UID: \"633da329-9386-4344-a6a9-5538401ca23a\") " Nov 25 08:08:36 crc kubenswrapper[4482]: I1125 08:08:36.756623 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/633da329-9386-4344-a6a9-5538401ca23a-utilities" (OuterVolumeSpecName: "utilities") pod "633da329-9386-4344-a6a9-5538401ca23a" (UID: "633da329-9386-4344-a6a9-5538401ca23a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:08:36 crc kubenswrapper[4482]: I1125 08:08:36.766146 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633da329-9386-4344-a6a9-5538401ca23a-kube-api-access-lxx8l" (OuterVolumeSpecName: "kube-api-access-lxx8l") pod "633da329-9386-4344-a6a9-5538401ca23a" (UID: "633da329-9386-4344-a6a9-5538401ca23a"). InnerVolumeSpecName "kube-api-access-lxx8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:08:36 crc kubenswrapper[4482]: I1125 08:08:36.795716 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/633da329-9386-4344-a6a9-5538401ca23a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "633da329-9386-4344-a6a9-5538401ca23a" (UID: "633da329-9386-4344-a6a9-5538401ca23a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:08:36 crc kubenswrapper[4482]: I1125 08:08:36.858481 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/633da329-9386-4344-a6a9-5538401ca23a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:36 crc kubenswrapper[4482]: I1125 08:08:36.858509 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxx8l\" (UniqueName: \"kubernetes.io/projected/633da329-9386-4344-a6a9-5538401ca23a-kube-api-access-lxx8l\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:36 crc kubenswrapper[4482]: I1125 08:08:36.858523 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/633da329-9386-4344-a6a9-5538401ca23a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.167376 4482 generic.go:334] "Generic (PLEG): container finished" podID="633da329-9386-4344-a6a9-5538401ca23a" containerID="a2698be4b8b36b21197d0eeefa34ea27e30ad4802e8c6e9147092d8c5ddaa7e9" exitCode=0 Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.167417 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zrgg" event={"ID":"633da329-9386-4344-a6a9-5538401ca23a","Type":"ContainerDied","Data":"a2698be4b8b36b21197d0eeefa34ea27e30ad4802e8c6e9147092d8c5ddaa7e9"} Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.167684 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zrgg" event={"ID":"633da329-9386-4344-a6a9-5538401ca23a","Type":"ContainerDied","Data":"8e1496d218be77a86ca15b1209d1b1a4484bb445a0ba524e176a40f81d87e9b7"} Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.167712 4482 scope.go:117] "RemoveContainer" containerID="a2698be4b8b36b21197d0eeefa34ea27e30ad4802e8c6e9147092d8c5ddaa7e9" Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.167449 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5zrgg" Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.191306 4482 scope.go:117] "RemoveContainer" containerID="5dfd27e0e63aa1088065c51288a80bff6fbb80dfac59fd1bd417b9816c834ea3" Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.203202 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5zrgg"] Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.210499 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5zrgg"] Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.219658 4482 scope.go:117] "RemoveContainer" containerID="e60745c7a25a01910deab3570c3eb4df917b17071f55a9938b04294c1650e5a8" Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.252876 4482 scope.go:117] "RemoveContainer" containerID="a2698be4b8b36b21197d0eeefa34ea27e30ad4802e8c6e9147092d8c5ddaa7e9" Nov 25 08:08:37 crc kubenswrapper[4482]: E1125 08:08:37.253165 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2698be4b8b36b21197d0eeefa34ea27e30ad4802e8c6e9147092d8c5ddaa7e9\": container with ID starting with a2698be4b8b36b21197d0eeefa34ea27e30ad4802e8c6e9147092d8c5ddaa7e9 not found: ID does not exist" containerID="a2698be4b8b36b21197d0eeefa34ea27e30ad4802e8c6e9147092d8c5ddaa7e9" Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.253208 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2698be4b8b36b21197d0eeefa34ea27e30ad4802e8c6e9147092d8c5ddaa7e9"} err="failed to get container status \"a2698be4b8b36b21197d0eeefa34ea27e30ad4802e8c6e9147092d8c5ddaa7e9\": rpc error: code = NotFound desc = could not find container \"a2698be4b8b36b21197d0eeefa34ea27e30ad4802e8c6e9147092d8c5ddaa7e9\": container with ID starting with a2698be4b8b36b21197d0eeefa34ea27e30ad4802e8c6e9147092d8c5ddaa7e9 not found: ID does not exist" Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.253228 4482 scope.go:117] "RemoveContainer" containerID="5dfd27e0e63aa1088065c51288a80bff6fbb80dfac59fd1bd417b9816c834ea3" Nov 25 08:08:37 crc kubenswrapper[4482]: E1125 08:08:37.253600 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dfd27e0e63aa1088065c51288a80bff6fbb80dfac59fd1bd417b9816c834ea3\": container with ID starting with 5dfd27e0e63aa1088065c51288a80bff6fbb80dfac59fd1bd417b9816c834ea3 not found: ID does not exist" containerID="5dfd27e0e63aa1088065c51288a80bff6fbb80dfac59fd1bd417b9816c834ea3" Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.253700 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dfd27e0e63aa1088065c51288a80bff6fbb80dfac59fd1bd417b9816c834ea3"} err="failed to get container status \"5dfd27e0e63aa1088065c51288a80bff6fbb80dfac59fd1bd417b9816c834ea3\": rpc error: code = NotFound desc = could not find container \"5dfd27e0e63aa1088065c51288a80bff6fbb80dfac59fd1bd417b9816c834ea3\": container with ID starting with 5dfd27e0e63aa1088065c51288a80bff6fbb80dfac59fd1bd417b9816c834ea3 not found: ID does not exist" Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.253774 4482 scope.go:117] "RemoveContainer" containerID="e60745c7a25a01910deab3570c3eb4df917b17071f55a9938b04294c1650e5a8" Nov 25 08:08:37 crc kubenswrapper[4482]: E1125 08:08:37.254195 4482 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e60745c7a25a01910deab3570c3eb4df917b17071f55a9938b04294c1650e5a8\": container with ID starting with e60745c7a25a01910deab3570c3eb4df917b17071f55a9938b04294c1650e5a8 not found: ID does not exist" containerID="e60745c7a25a01910deab3570c3eb4df917b17071f55a9938b04294c1650e5a8" Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.254242 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e60745c7a25a01910deab3570c3eb4df917b17071f55a9938b04294c1650e5a8"} err="failed to get container status \"e60745c7a25a01910deab3570c3eb4df917b17071f55a9938b04294c1650e5a8\": rpc error: code = NotFound desc = could not find container \"e60745c7a25a01910deab3570c3eb4df917b17071f55a9938b04294c1650e5a8\": container with ID starting with e60745c7a25a01910deab3570c3eb4df917b17071f55a9938b04294c1650e5a8 not found: ID does not exist" Nov 25 08:08:37 crc kubenswrapper[4482]: I1125 08:08:37.841725 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="633da329-9386-4344-a6a9-5538401ca23a" path="/var/lib/kubelet/pods/633da329-9386-4344-a6a9-5538401ca23a/volumes" Nov 25 08:08:39 crc kubenswrapper[4482]: I1125 08:08:39.117508 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:08:39 crc kubenswrapper[4482]: I1125 08:08:39.117570 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:08:53 crc kubenswrapper[4482]: I1125 08:08:53.248452 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:08:53 crc kubenswrapper[4482]: I1125 08:08:53.311484 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-656dff569f-qv7tq"] Nov 25 08:08:53 crc kubenswrapper[4482]: I1125 08:08:53.311824 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-656dff569f-qv7tq" podUID="22d88363-431c-4a28-818e-f200d37d64b5" containerName="neutron-api" containerID="cri-o://ed6faa351c929fc918d6a73844719d8ef5abff25a831b2383f25c3a2c4b7b338" gracePeriod=30 Nov 25 08:08:53 crc kubenswrapper[4482]: I1125 08:08:53.312477 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-656dff569f-qv7tq" podUID="22d88363-431c-4a28-818e-f200d37d64b5" containerName="neutron-httpd" containerID="cri-o://fe51728de760837a078b8a05ca66fcfe6da4809abf66d4e5b3af011b979e5c8a" gracePeriod=30 Nov 25 08:08:54 crc kubenswrapper[4482]: I1125 08:08:54.328839 4482 generic.go:334] "Generic (PLEG): container finished" podID="22d88363-431c-4a28-818e-f200d37d64b5" containerID="fe51728de760837a078b8a05ca66fcfe6da4809abf66d4e5b3af011b979e5c8a" exitCode=0 Nov 25 08:08:54 crc kubenswrapper[4482]: I1125 08:08:54.329049 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-656dff569f-qv7tq" event={"ID":"22d88363-431c-4a28-818e-f200d37d64b5","Type":"ContainerDied","Data":"fe51728de760837a078b8a05ca66fcfe6da4809abf66d4e5b3af011b979e5c8a"} Nov 
25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.277074 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-656dff569f-qv7tq" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.391191 4482 generic.go:334] "Generic (PLEG): container finished" podID="22d88363-431c-4a28-818e-f200d37d64b5" containerID="ed6faa351c929fc918d6a73844719d8ef5abff25a831b2383f25c3a2c4b7b338" exitCode=0 Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.391268 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-656dff569f-qv7tq" event={"ID":"22d88363-431c-4a28-818e-f200d37d64b5","Type":"ContainerDied","Data":"ed6faa351c929fc918d6a73844719d8ef5abff25a831b2383f25c3a2c4b7b338"} Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.391333 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-656dff569f-qv7tq" event={"ID":"22d88363-431c-4a28-818e-f200d37d64b5","Type":"ContainerDied","Data":"00392a0ad2bf729aac1e206b36eba0d86dfc42fe3fce74b8bc9caf4102ebf78a"} Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.391356 4482 scope.go:117] "RemoveContainer" containerID="fe51728de760837a078b8a05ca66fcfe6da4809abf66d4e5b3af011b979e5c8a" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.391615 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-656dff569f-qv7tq" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.419360 4482 scope.go:117] "RemoveContainer" containerID="ed6faa351c929fc918d6a73844719d8ef5abff25a831b2383f25c3a2c4b7b338" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.443283 4482 scope.go:117] "RemoveContainer" containerID="fe51728de760837a078b8a05ca66fcfe6da4809abf66d4e5b3af011b979e5c8a" Nov 25 08:08:59 crc kubenswrapper[4482]: E1125 08:08:59.444002 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe51728de760837a078b8a05ca66fcfe6da4809abf66d4e5b3af011b979e5c8a\": container with ID starting with fe51728de760837a078b8a05ca66fcfe6da4809abf66d4e5b3af011b979e5c8a not found: ID does not exist" containerID="fe51728de760837a078b8a05ca66fcfe6da4809abf66d4e5b3af011b979e5c8a" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.444120 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe51728de760837a078b8a05ca66fcfe6da4809abf66d4e5b3af011b979e5c8a"} err="failed to get container status \"fe51728de760837a078b8a05ca66fcfe6da4809abf66d4e5b3af011b979e5c8a\": rpc error: code = NotFound desc = could not find container \"fe51728de760837a078b8a05ca66fcfe6da4809abf66d4e5b3af011b979e5c8a\": container with ID starting with fe51728de760837a078b8a05ca66fcfe6da4809abf66d4e5b3af011b979e5c8a not found: ID does not exist" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.444240 4482 scope.go:117] "RemoveContainer" containerID="ed6faa351c929fc918d6a73844719d8ef5abff25a831b2383f25c3a2c4b7b338" Nov 25 08:08:59 crc kubenswrapper[4482]: E1125 08:08:59.444530 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed6faa351c929fc918d6a73844719d8ef5abff25a831b2383f25c3a2c4b7b338\": container with ID starting with ed6faa351c929fc918d6a73844719d8ef5abff25a831b2383f25c3a2c4b7b338 not found: ID does not exist" containerID="ed6faa351c929fc918d6a73844719d8ef5abff25a831b2383f25c3a2c4b7b338" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.444552 4482 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed6faa351c929fc918d6a73844719d8ef5abff25a831b2383f25c3a2c4b7b338"} err="failed to get container status \"ed6faa351c929fc918d6a73844719d8ef5abff25a831b2383f25c3a2c4b7b338\": rpc error: code = NotFound desc = could not find container \"ed6faa351c929fc918d6a73844719d8ef5abff25a831b2383f25c3a2c4b7b338\": container with ID starting with ed6faa351c929fc918d6a73844719d8ef5abff25a831b2383f25c3a2c4b7b338 not found: ID does not exist" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.478494 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-httpd-config\") pod \"22d88363-431c-4a28-818e-f200d37d64b5\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.478668 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-public-tls-certs\") pod \"22d88363-431c-4a28-818e-f200d37d64b5\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.478772 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-config\") pod \"22d88363-431c-4a28-818e-f200d37d64b5\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.478805 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz5ww\" (UniqueName: \"kubernetes.io/projected/22d88363-431c-4a28-818e-f200d37d64b5-kube-api-access-lz5ww\") pod \"22d88363-431c-4a28-818e-f200d37d64b5\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.478875 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-combined-ca-bundle\") pod \"22d88363-431c-4a28-818e-f200d37d64b5\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.479001 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-internal-tls-certs\") pod \"22d88363-431c-4a28-818e-f200d37d64b5\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.479058 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-ovndb-tls-certs\") pod \"22d88363-431c-4a28-818e-f200d37d64b5\" (UID: \"22d88363-431c-4a28-818e-f200d37d64b5\") " Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.487179 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d88363-431c-4a28-818e-f200d37d64b5-kube-api-access-lz5ww" (OuterVolumeSpecName: "kube-api-access-lz5ww") pod "22d88363-431c-4a28-818e-f200d37d64b5" (UID: "22d88363-431c-4a28-818e-f200d37d64b5"). InnerVolumeSpecName "kube-api-access-lz5ww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.487893 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "22d88363-431c-4a28-818e-f200d37d64b5" (UID: "22d88363-431c-4a28-818e-f200d37d64b5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.523327 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "22d88363-431c-4a28-818e-f200d37d64b5" (UID: "22d88363-431c-4a28-818e-f200d37d64b5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.528548 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22d88363-431c-4a28-818e-f200d37d64b5" (UID: "22d88363-431c-4a28-818e-f200d37d64b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.529926 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "22d88363-431c-4a28-818e-f200d37d64b5" (UID: "22d88363-431c-4a28-818e-f200d37d64b5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.547911 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-config" (OuterVolumeSpecName: "config") pod "22d88363-431c-4a28-818e-f200d37d64b5" (UID: "22d88363-431c-4a28-818e-f200d37d64b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.557637 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "22d88363-431c-4a28-818e-f200d37d64b5" (UID: "22d88363-431c-4a28-818e-f200d37d64b5"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.583679 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.583708 4482 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.583720 4482 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.583736 4482 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.583746 4482 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.583756 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/22d88363-431c-4a28-818e-f200d37d64b5-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.583767 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz5ww\" (UniqueName: \"kubernetes.io/projected/22d88363-431c-4a28-818e-f200d37d64b5-kube-api-access-lz5ww\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.727203 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-656dff569f-qv7tq"] Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.735493 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-656dff569f-qv7tq"] Nov 25 08:08:59 crc kubenswrapper[4482]: I1125 08:08:59.841348 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d88363-431c-4a28-818e-f200d37d64b5" path="/var/lib/kubelet/pods/22d88363-431c-4a28-818e-f200d37d64b5/volumes" Nov 25 08:09:09 crc kubenswrapper[4482]: I1125 08:09:09.117356 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:09:09 crc kubenswrapper[4482]: I1125 08:09:09.118888 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:09:24 crc kubenswrapper[4482]: E1125 08:09:24.767477 4482 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.26.133:51340->192.168.26.133:42749: write tcp 192.168.26.133:51340->192.168.26.133:42749: write: connection reset by peer Nov 25 08:09:39 crc 
kubenswrapper[4482]: I1125 08:09:39.117882 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:09:39 crc kubenswrapper[4482]: I1125 08:09:39.118583 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:09:39 crc kubenswrapper[4482]: I1125 08:09:39.118673 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 08:09:39 crc kubenswrapper[4482]: I1125 08:09:39.119291 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:09:39 crc kubenswrapper[4482]: I1125 08:09:39.119339 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" gracePeriod=600 Nov 25 08:09:39 crc kubenswrapper[4482]: E1125 08:09:39.241247 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:09:39 crc kubenswrapper[4482]: I1125 08:09:39.804071 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" exitCode=0 Nov 25 08:09:39 crc kubenswrapper[4482]: I1125 08:09:39.804115 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab"} Nov 25 08:09:39 crc kubenswrapper[4482]: I1125 08:09:39.804148 4482 scope.go:117] "RemoveContainer" containerID="b771395cc7f74353b9f17d150b95c27d8459871c82cb05d24be7ce86de7b60a2" Nov 25 08:09:39 crc kubenswrapper[4482]: I1125 08:09:39.804814 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:09:39 crc kubenswrapper[4482]: E1125 08:09:39.805048 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:09:51 crc kubenswrapper[4482]: I1125 08:09:51.830560 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:09:51 crc kubenswrapper[4482]: E1125 08:09:51.831519 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:10:06 crc kubenswrapper[4482]: I1125 08:10:06.831286 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:10:06 crc kubenswrapper[4482]: E1125 08:10:06.833247 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:10:21 crc kubenswrapper[4482]: I1125 08:10:21.831935 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:10:21 crc kubenswrapper[4482]: E1125 08:10:21.833007 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:10:34 crc kubenswrapper[4482]: I1125 08:10:34.830865 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:10:34 crc kubenswrapper[4482]: E1125 08:10:34.831633 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:10:49 crc kubenswrapper[4482]: I1125 08:10:49.830602 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:10:49 crc kubenswrapper[4482]: E1125 08:10:49.831712 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" 
podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:11:00 crc kubenswrapper[4482]: I1125 08:11:00.831340 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:11:00 crc kubenswrapper[4482]: E1125 08:11:00.831974 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:11:13 crc kubenswrapper[4482]: I1125 08:11:13.831203 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:11:13 crc kubenswrapper[4482]: E1125 08:11:13.831800 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:11:24 crc kubenswrapper[4482]: I1125 08:11:24.831103 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:11:24 crc kubenswrapper[4482]: E1125 08:11:24.831751 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:11:35 crc kubenswrapper[4482]: I1125 08:11:35.835633 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:11:35 crc kubenswrapper[4482]: E1125 08:11:35.836227 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:11:50 crc kubenswrapper[4482]: I1125 08:11:50.830349 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:11:50 crc kubenswrapper[4482]: E1125 08:11:50.830885 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:12:03 crc kubenswrapper[4482]: I1125 08:12:03.830576 4482 scope.go:117] "RemoveContainer" 
containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:12:03 crc kubenswrapper[4482]: E1125 08:12:03.831359 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:12:14 crc kubenswrapper[4482]: I1125 08:12:14.831161 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:12:14 crc kubenswrapper[4482]: E1125 08:12:14.832418 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:12:27 crc kubenswrapper[4482]: I1125 08:12:27.831003 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:12:27 crc kubenswrapper[4482]: E1125 08:12:27.831740 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:12:41 crc kubenswrapper[4482]: I1125 08:12:41.830824 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:12:41 crc kubenswrapper[4482]: E1125 08:12:41.831677 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:12:55 crc kubenswrapper[4482]: I1125 08:12:55.838850 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:12:55 crc kubenswrapper[4482]: E1125 08:12:55.839689 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:13:07 crc kubenswrapper[4482]: I1125 08:13:07.830688 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:13:07 crc kubenswrapper[4482]: E1125 08:13:07.831413 4482 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:13:21 crc kubenswrapper[4482]: I1125 08:13:21.830190 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:13:21 crc kubenswrapper[4482]: E1125 08:13:21.830803 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:13:35 crc kubenswrapper[4482]: I1125 08:13:35.836397 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:13:35 crc kubenswrapper[4482]: E1125 08:13:35.836967 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:13:49 crc kubenswrapper[4482]: I1125 08:13:49.830480 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:13:49 crc kubenswrapper[4482]: E1125 08:13:49.830986 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.401234 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zrzdl"] Nov 25 08:14:03 crc kubenswrapper[4482]: E1125 08:14:03.401924 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d88363-431c-4a28-818e-f200d37d64b5" containerName="neutron-api" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.401936 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d88363-431c-4a28-818e-f200d37d64b5" containerName="neutron-api" Nov 25 08:14:03 crc kubenswrapper[4482]: E1125 08:14:03.401954 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d88363-431c-4a28-818e-f200d37d64b5" containerName="neutron-httpd" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.401960 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d88363-431c-4a28-818e-f200d37d64b5" containerName="neutron-httpd" Nov 25 08:14:03 crc kubenswrapper[4482]: E1125 08:14:03.401984 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633da329-9386-4344-a6a9-5538401ca23a" containerName="registry-server" 
Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.401989 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="633da329-9386-4344-a6a9-5538401ca23a" containerName="registry-server" Nov 25 08:14:03 crc kubenswrapper[4482]: E1125 08:14:03.402000 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633da329-9386-4344-a6a9-5538401ca23a" containerName="extract-content" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.402005 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="633da329-9386-4344-a6a9-5538401ca23a" containerName="extract-content" Nov 25 08:14:03 crc kubenswrapper[4482]: E1125 08:14:03.402017 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4225aa99-466b-4b5f-9138-ff1ae17a5b15" containerName="registry-server" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.402023 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="4225aa99-466b-4b5f-9138-ff1ae17a5b15" containerName="registry-server" Nov 25 08:14:03 crc kubenswrapper[4482]: E1125 08:14:03.402034 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4225aa99-466b-4b5f-9138-ff1ae17a5b15" containerName="extract-content" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.402044 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="4225aa99-466b-4b5f-9138-ff1ae17a5b15" containerName="extract-content" Nov 25 08:14:03 crc kubenswrapper[4482]: E1125 08:14:03.402051 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633da329-9386-4344-a6a9-5538401ca23a" containerName="extract-utilities" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.402056 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="633da329-9386-4344-a6a9-5538401ca23a" containerName="extract-utilities" Nov 25 08:14:03 crc kubenswrapper[4482]: E1125 08:14:03.402067 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4225aa99-466b-4b5f-9138-ff1ae17a5b15" containerName="extract-utilities" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.402072 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="4225aa99-466b-4b5f-9138-ff1ae17a5b15" containerName="extract-utilities" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.402286 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="4225aa99-466b-4b5f-9138-ff1ae17a5b15" containerName="registry-server" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.402304 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="633da329-9386-4344-a6a9-5538401ca23a" containerName="registry-server" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.402316 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d88363-431c-4a28-818e-f200d37d64b5" containerName="neutron-httpd" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.402326 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d88363-431c-4a28-818e-f200d37d64b5" containerName="neutron-api" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.403519 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.431438 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrzdl"] Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.482921 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7jjl\" (UniqueName: \"kubernetes.io/projected/f0936027-3acb-4204-8e9e-48e7519a953d-kube-api-access-m7jjl\") pod \"redhat-operators-zrzdl\" (UID: \"f0936027-3acb-4204-8e9e-48e7519a953d\") " pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.483064 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0936027-3acb-4204-8e9e-48e7519a953d-utilities\") pod \"redhat-operators-zrzdl\" (UID: \"f0936027-3acb-4204-8e9e-48e7519a953d\") " pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.483123 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0936027-3acb-4204-8e9e-48e7519a953d-catalog-content\") pod \"redhat-operators-zrzdl\" (UID: \"f0936027-3acb-4204-8e9e-48e7519a953d\") " pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.584249 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0936027-3acb-4204-8e9e-48e7519a953d-utilities\") pod \"redhat-operators-zrzdl\" (UID: \"f0936027-3acb-4204-8e9e-48e7519a953d\") " pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.584308 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0936027-3acb-4204-8e9e-48e7519a953d-catalog-content\") pod \"redhat-operators-zrzdl\" (UID: \"f0936027-3acb-4204-8e9e-48e7519a953d\") " pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.584368 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7jjl\" (UniqueName: \"kubernetes.io/projected/f0936027-3acb-4204-8e9e-48e7519a953d-kube-api-access-m7jjl\") pod \"redhat-operators-zrzdl\" (UID: \"f0936027-3acb-4204-8e9e-48e7519a953d\") " pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.584720 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0936027-3acb-4204-8e9e-48e7519a953d-utilities\") pod \"redhat-operators-zrzdl\" (UID: \"f0936027-3acb-4204-8e9e-48e7519a953d\") " pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.584749 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0936027-3acb-4204-8e9e-48e7519a953d-catalog-content\") pod \"redhat-operators-zrzdl\" (UID: \"f0936027-3acb-4204-8e9e-48e7519a953d\") " pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.603969 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m7jjl\" (UniqueName: \"kubernetes.io/projected/f0936027-3acb-4204-8e9e-48e7519a953d-kube-api-access-m7jjl\") pod \"redhat-operators-zrzdl\" (UID: \"f0936027-3acb-4204-8e9e-48e7519a953d\") " pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.720932 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:03 crc kubenswrapper[4482]: I1125 08:14:03.831906 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:14:03 crc kubenswrapper[4482]: E1125 08:14:03.833092 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:14:04 crc kubenswrapper[4482]: I1125 08:14:04.151087 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrzdl"] Nov 25 08:14:04 crc kubenswrapper[4482]: I1125 08:14:04.877426 4482 generic.go:334] "Generic (PLEG): container finished" podID="f0936027-3acb-4204-8e9e-48e7519a953d" containerID="fa0970063a047d7d4377a93e3c1fc9c7f2883c2a6364411ef0724049ca48a526" exitCode=0 Nov 25 08:14:04 crc kubenswrapper[4482]: I1125 08:14:04.877523 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrzdl" event={"ID":"f0936027-3acb-4204-8e9e-48e7519a953d","Type":"ContainerDied","Data":"fa0970063a047d7d4377a93e3c1fc9c7f2883c2a6364411ef0724049ca48a526"} Nov 25 08:14:04 crc kubenswrapper[4482]: I1125 08:14:04.877643 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrzdl" event={"ID":"f0936027-3acb-4204-8e9e-48e7519a953d","Type":"ContainerStarted","Data":"d98535cee46362d9674f7f3add6b7b7b8fce192bcba2021d11e5f5e9fdc4c257"} Nov 25 08:14:04 crc kubenswrapper[4482]: I1125 08:14:04.878957 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:14:05 crc kubenswrapper[4482]: I1125 08:14:05.888322 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrzdl" event={"ID":"f0936027-3acb-4204-8e9e-48e7519a953d","Type":"ContainerStarted","Data":"b49ace0daa5972c03b2f55f68af8592fdc7d7f0c3d9381f5d38dd781ba11fa60"} Nov 25 08:14:08 crc kubenswrapper[4482]: I1125 08:14:08.917128 4482 generic.go:334] "Generic (PLEG): container finished" podID="f0936027-3acb-4204-8e9e-48e7519a953d" containerID="b49ace0daa5972c03b2f55f68af8592fdc7d7f0c3d9381f5d38dd781ba11fa60" exitCode=0 Nov 25 08:14:08 crc kubenswrapper[4482]: I1125 08:14:08.917192 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrzdl" event={"ID":"f0936027-3acb-4204-8e9e-48e7519a953d","Type":"ContainerDied","Data":"b49ace0daa5972c03b2f55f68af8592fdc7d7f0c3d9381f5d38dd781ba11fa60"} Nov 25 08:14:09 crc kubenswrapper[4482]: I1125 08:14:09.936484 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrzdl" 
event={"ID":"f0936027-3acb-4204-8e9e-48e7519a953d","Type":"ContainerStarted","Data":"bfecbfb33e5e712d1470357f07c4b5fcb138636847d7de2804fb8f5b5acba1b3"} Nov 25 08:14:13 crc kubenswrapper[4482]: I1125 08:14:13.721547 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:13 crc kubenswrapper[4482]: I1125 08:14:13.722180 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:14 crc kubenswrapper[4482]: I1125 08:14:14.763774 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zrzdl" podUID="f0936027-3acb-4204-8e9e-48e7519a953d" containerName="registry-server" probeResult="failure" output=< Nov 25 08:14:14 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 08:14:14 crc kubenswrapper[4482]: > Nov 25 08:14:14 crc kubenswrapper[4482]: I1125 08:14:14.832654 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:14:14 crc kubenswrapper[4482]: E1125 08:14:14.833250 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:14:23 crc kubenswrapper[4482]: I1125 08:14:23.754962 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:23 crc kubenswrapper[4482]: I1125 08:14:23.771179 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zrzdl" podStartSLOduration=16.151181381 podStartE2EDuration="20.771142041s" podCreationTimestamp="2025-11-25 08:14:03 +0000 UTC" firstStartedPulling="2025-11-25 08:14:04.878726592 +0000 UTC m=+5219.366957851" lastFinishedPulling="2025-11-25 08:14:09.498687252 +0000 UTC m=+5223.986918511" observedRunningTime="2025-11-25 08:14:09.964312402 +0000 UTC m=+5224.452543661" watchObservedRunningTime="2025-11-25 08:14:23.771142041 +0000 UTC m=+5238.259373300" Nov 25 08:14:23 crc kubenswrapper[4482]: I1125 08:14:23.792750 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:23 crc kubenswrapper[4482]: I1125 08:14:23.987313 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrzdl"] Nov 25 08:14:25 crc kubenswrapper[4482]: I1125 08:14:25.048782 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zrzdl" podUID="f0936027-3acb-4204-8e9e-48e7519a953d" containerName="registry-server" containerID="cri-o://bfecbfb33e5e712d1470357f07c4b5fcb138636847d7de2804fb8f5b5acba1b3" gracePeriod=2 Nov 25 08:14:25 crc kubenswrapper[4482]: I1125 08:14:25.418464 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:25 crc kubenswrapper[4482]: I1125 08:14:25.587474 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0936027-3acb-4204-8e9e-48e7519a953d-utilities\") pod \"f0936027-3acb-4204-8e9e-48e7519a953d\" (UID: \"f0936027-3acb-4204-8e9e-48e7519a953d\") " Nov 25 08:14:25 crc kubenswrapper[4482]: I1125 08:14:25.587878 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7jjl\" (UniqueName: \"kubernetes.io/projected/f0936027-3acb-4204-8e9e-48e7519a953d-kube-api-access-m7jjl\") pod \"f0936027-3acb-4204-8e9e-48e7519a953d\" (UID: \"f0936027-3acb-4204-8e9e-48e7519a953d\") " Nov 25 08:14:25 crc kubenswrapper[4482]: I1125 08:14:25.588009 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0936027-3acb-4204-8e9e-48e7519a953d-catalog-content\") pod \"f0936027-3acb-4204-8e9e-48e7519a953d\" (UID: \"f0936027-3acb-4204-8e9e-48e7519a953d\") " Nov 25 08:14:25 crc kubenswrapper[4482]: I1125 08:14:25.588125 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0936027-3acb-4204-8e9e-48e7519a953d-utilities" (OuterVolumeSpecName: "utilities") pod "f0936027-3acb-4204-8e9e-48e7519a953d" (UID: "f0936027-3acb-4204-8e9e-48e7519a953d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:14:25 crc kubenswrapper[4482]: I1125 08:14:25.588520 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0936027-3acb-4204-8e9e-48e7519a953d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:25 crc kubenswrapper[4482]: I1125 08:14:25.592529 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0936027-3acb-4204-8e9e-48e7519a953d-kube-api-access-m7jjl" (OuterVolumeSpecName: "kube-api-access-m7jjl") pod "f0936027-3acb-4204-8e9e-48e7519a953d" (UID: "f0936027-3acb-4204-8e9e-48e7519a953d"). InnerVolumeSpecName "kube-api-access-m7jjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:14:25 crc kubenswrapper[4482]: I1125 08:14:25.656817 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0936027-3acb-4204-8e9e-48e7519a953d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0936027-3acb-4204-8e9e-48e7519a953d" (UID: "f0936027-3acb-4204-8e9e-48e7519a953d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:14:25 crc kubenswrapper[4482]: I1125 08:14:25.690809 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0936027-3acb-4204-8e9e-48e7519a953d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:25 crc kubenswrapper[4482]: I1125 08:14:25.690833 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7jjl\" (UniqueName: \"kubernetes.io/projected/f0936027-3acb-4204-8e9e-48e7519a953d-kube-api-access-m7jjl\") on node \"crc\" DevicePath \"\"" Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.056539 4482 generic.go:334] "Generic (PLEG): container finished" podID="f0936027-3acb-4204-8e9e-48e7519a953d" containerID="bfecbfb33e5e712d1470357f07c4b5fcb138636847d7de2804fb8f5b5acba1b3" exitCode=0 Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.056589 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrzdl" event={"ID":"f0936027-3acb-4204-8e9e-48e7519a953d","Type":"ContainerDied","Data":"bfecbfb33e5e712d1470357f07c4b5fcb138636847d7de2804fb8f5b5acba1b3"} Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.056637 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrzdl" event={"ID":"f0936027-3acb-4204-8e9e-48e7519a953d","Type":"ContainerDied","Data":"d98535cee46362d9674f7f3add6b7b7b8fce192bcba2021d11e5f5e9fdc4c257"} Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.056656 4482 scope.go:117] "RemoveContainer" containerID="bfecbfb33e5e712d1470357f07c4b5fcb138636847d7de2804fb8f5b5acba1b3" Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.057249 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrzdl" Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.076927 4482 scope.go:117] "RemoveContainer" containerID="b49ace0daa5972c03b2f55f68af8592fdc7d7f0c3d9381f5d38dd781ba11fa60" Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.079126 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrzdl"] Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.085892 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zrzdl"] Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.092974 4482 scope.go:117] "RemoveContainer" containerID="fa0970063a047d7d4377a93e3c1fc9c7f2883c2a6364411ef0724049ca48a526" Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.124099 4482 scope.go:117] "RemoveContainer" containerID="bfecbfb33e5e712d1470357f07c4b5fcb138636847d7de2804fb8f5b5acba1b3" Nov 25 08:14:26 crc kubenswrapper[4482]: E1125 08:14:26.124461 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfecbfb33e5e712d1470357f07c4b5fcb138636847d7de2804fb8f5b5acba1b3\": container with ID starting with bfecbfb33e5e712d1470357f07c4b5fcb138636847d7de2804fb8f5b5acba1b3 not found: ID does not exist" containerID="bfecbfb33e5e712d1470357f07c4b5fcb138636847d7de2804fb8f5b5acba1b3" Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.124492 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfecbfb33e5e712d1470357f07c4b5fcb138636847d7de2804fb8f5b5acba1b3"} err="failed to get container status \"bfecbfb33e5e712d1470357f07c4b5fcb138636847d7de2804fb8f5b5acba1b3\": rpc error: code = NotFound desc = could not find container \"bfecbfb33e5e712d1470357f07c4b5fcb138636847d7de2804fb8f5b5acba1b3\": container with ID starting with bfecbfb33e5e712d1470357f07c4b5fcb138636847d7de2804fb8f5b5acba1b3 not found: ID does not exist" Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.124513 4482 scope.go:117] "RemoveContainer" containerID="b49ace0daa5972c03b2f55f68af8592fdc7d7f0c3d9381f5d38dd781ba11fa60" Nov 25 08:14:26 crc kubenswrapper[4482]: E1125 08:14:26.124781 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b49ace0daa5972c03b2f55f68af8592fdc7d7f0c3d9381f5d38dd781ba11fa60\": container with ID starting with b49ace0daa5972c03b2f55f68af8592fdc7d7f0c3d9381f5d38dd781ba11fa60 not found: ID does not exist" containerID="b49ace0daa5972c03b2f55f68af8592fdc7d7f0c3d9381f5d38dd781ba11fa60" Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.124813 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b49ace0daa5972c03b2f55f68af8592fdc7d7f0c3d9381f5d38dd781ba11fa60"} err="failed to get container status \"b49ace0daa5972c03b2f55f68af8592fdc7d7f0c3d9381f5d38dd781ba11fa60\": rpc error: code = NotFound desc = could not find container \"b49ace0daa5972c03b2f55f68af8592fdc7d7f0c3d9381f5d38dd781ba11fa60\": container with ID starting with b49ace0daa5972c03b2f55f68af8592fdc7d7f0c3d9381f5d38dd781ba11fa60 not found: ID does not exist" Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.124836 4482 scope.go:117] "RemoveContainer" containerID="fa0970063a047d7d4377a93e3c1fc9c7f2883c2a6364411ef0724049ca48a526" Nov 25 08:14:26 crc kubenswrapper[4482]: E1125 08:14:26.125192 4482 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"fa0970063a047d7d4377a93e3c1fc9c7f2883c2a6364411ef0724049ca48a526\": container with ID starting with fa0970063a047d7d4377a93e3c1fc9c7f2883c2a6364411ef0724049ca48a526 not found: ID does not exist" containerID="fa0970063a047d7d4377a93e3c1fc9c7f2883c2a6364411ef0724049ca48a526" Nov 25 08:14:26 crc kubenswrapper[4482]: I1125 08:14:26.125218 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0970063a047d7d4377a93e3c1fc9c7f2883c2a6364411ef0724049ca48a526"} err="failed to get container status \"fa0970063a047d7d4377a93e3c1fc9c7f2883c2a6364411ef0724049ca48a526\": rpc error: code = NotFound desc = could not find container \"fa0970063a047d7d4377a93e3c1fc9c7f2883c2a6364411ef0724049ca48a526\": container with ID starting with fa0970063a047d7d4377a93e3c1fc9c7f2883c2a6364411ef0724049ca48a526 not found: ID does not exist" Nov 25 08:14:27 crc kubenswrapper[4482]: I1125 08:14:27.838355 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0936027-3acb-4204-8e9e-48e7519a953d" path="/var/lib/kubelet/pods/f0936027-3acb-4204-8e9e-48e7519a953d/volumes" Nov 25 08:14:29 crc kubenswrapper[4482]: I1125 08:14:29.831337 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:14:29 crc kubenswrapper[4482]: E1125 08:14:29.831682 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:14:44 crc kubenswrapper[4482]: I1125 08:14:44.831070 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:14:45 crc kubenswrapper[4482]: I1125 08:14:45.183399 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"1308a74aee46cf01529f8cee096203479d5ba7a7a014bb93c1f657cd68cb879a"} Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.148640 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp"] Nov 25 08:15:00 crc kubenswrapper[4482]: E1125 08:15:00.149745 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0936027-3acb-4204-8e9e-48e7519a953d" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.149926 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0936027-3acb-4204-8e9e-48e7519a953d" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4482]: E1125 08:15:00.149960 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0936027-3acb-4204-8e9e-48e7519a953d" containerName="extract-utilities" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.149966 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0936027-3acb-4204-8e9e-48e7519a953d" containerName="extract-utilities" Nov 25 08:15:00 crc kubenswrapper[4482]: E1125 08:15:00.149994 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0936027-3acb-4204-8e9e-48e7519a953d" 
containerName="extract-content" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.150000 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0936027-3acb-4204-8e9e-48e7519a953d" containerName="extract-content" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.150281 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0936027-3acb-4204-8e9e-48e7519a953d" containerName="registry-server" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.151087 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.153521 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.153775 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.159863 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp"] Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.250250 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7sbh\" (UniqueName: \"kubernetes.io/projected/92afaaee-b11e-4bce-9967-673ca19b70f0-kube-api-access-r7sbh\") pod \"collect-profiles-29400975-4k4bp\" (UID: \"92afaaee-b11e-4bce-9967-673ca19b70f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.250742 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92afaaee-b11e-4bce-9967-673ca19b70f0-config-volume\") pod \"collect-profiles-29400975-4k4bp\" (UID: \"92afaaee-b11e-4bce-9967-673ca19b70f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.250890 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/92afaaee-b11e-4bce-9967-673ca19b70f0-secret-volume\") pod \"collect-profiles-29400975-4k4bp\" (UID: \"92afaaee-b11e-4bce-9967-673ca19b70f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.351860 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7sbh\" (UniqueName: \"kubernetes.io/projected/92afaaee-b11e-4bce-9967-673ca19b70f0-kube-api-access-r7sbh\") pod \"collect-profiles-29400975-4k4bp\" (UID: \"92afaaee-b11e-4bce-9967-673ca19b70f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.351940 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92afaaee-b11e-4bce-9967-673ca19b70f0-config-volume\") pod \"collect-profiles-29400975-4k4bp\" (UID: \"92afaaee-b11e-4bce-9967-673ca19b70f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.352018 4482 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/92afaaee-b11e-4bce-9967-673ca19b70f0-secret-volume\") pod \"collect-profiles-29400975-4k4bp\" (UID: \"92afaaee-b11e-4bce-9967-673ca19b70f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.353208 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92afaaee-b11e-4bce-9967-673ca19b70f0-config-volume\") pod \"collect-profiles-29400975-4k4bp\" (UID: \"92afaaee-b11e-4bce-9967-673ca19b70f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.357335 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/92afaaee-b11e-4bce-9967-673ca19b70f0-secret-volume\") pod \"collect-profiles-29400975-4k4bp\" (UID: \"92afaaee-b11e-4bce-9967-673ca19b70f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.367883 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7sbh\" (UniqueName: \"kubernetes.io/projected/92afaaee-b11e-4bce-9967-673ca19b70f0-kube-api-access-r7sbh\") pod \"collect-profiles-29400975-4k4bp\" (UID: \"92afaaee-b11e-4bce-9967-673ca19b70f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.480310 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" Nov 25 08:15:00 crc kubenswrapper[4482]: I1125 08:15:00.873526 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp"] Nov 25 08:15:01 crc kubenswrapper[4482]: I1125 08:15:01.306378 4482 generic.go:334] "Generic (PLEG): container finished" podID="92afaaee-b11e-4bce-9967-673ca19b70f0" containerID="fd73fa1ef7e4667f049473daa082825beac83f6420f85f7078f9ad786d27c83c" exitCode=0 Nov 25 08:15:01 crc kubenswrapper[4482]: I1125 08:15:01.306425 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" event={"ID":"92afaaee-b11e-4bce-9967-673ca19b70f0","Type":"ContainerDied","Data":"fd73fa1ef7e4667f049473daa082825beac83f6420f85f7078f9ad786d27c83c"} Nov 25 08:15:01 crc kubenswrapper[4482]: I1125 08:15:01.306652 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" event={"ID":"92afaaee-b11e-4bce-9967-673ca19b70f0","Type":"ContainerStarted","Data":"dd19a637c0551b36f52a0893de3ff790fab1627c082a4f286a9c93b656c1b58f"} Nov 25 08:15:01 crc kubenswrapper[4482]: E1125 08:15:01.328595 4482 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92afaaee_b11e_4bce_9967_673ca19b70f0.slice/crio-conmon-fd73fa1ef7e4667f049473daa082825beac83f6420f85f7078f9ad786d27c83c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92afaaee_b11e_4bce_9967_673ca19b70f0.slice/crio-fd73fa1ef7e4667f049473daa082825beac83f6420f85f7078f9ad786d27c83c.scope\": RecentStats: unable to find data in memory cache]" Nov 
25 08:15:02 crc kubenswrapper[4482]: I1125 08:15:02.620847 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" Nov 25 08:15:02 crc kubenswrapper[4482]: I1125 08:15:02.794655 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92afaaee-b11e-4bce-9967-673ca19b70f0-config-volume\") pod \"92afaaee-b11e-4bce-9967-673ca19b70f0\" (UID: \"92afaaee-b11e-4bce-9967-673ca19b70f0\") " Nov 25 08:15:02 crc kubenswrapper[4482]: I1125 08:15:02.794730 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7sbh\" (UniqueName: \"kubernetes.io/projected/92afaaee-b11e-4bce-9967-673ca19b70f0-kube-api-access-r7sbh\") pod \"92afaaee-b11e-4bce-9967-673ca19b70f0\" (UID: \"92afaaee-b11e-4bce-9967-673ca19b70f0\") " Nov 25 08:15:02 crc kubenswrapper[4482]: I1125 08:15:02.794949 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/92afaaee-b11e-4bce-9967-673ca19b70f0-secret-volume\") pod \"92afaaee-b11e-4bce-9967-673ca19b70f0\" (UID: \"92afaaee-b11e-4bce-9967-673ca19b70f0\") " Nov 25 08:15:02 crc kubenswrapper[4482]: I1125 08:15:02.795432 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92afaaee-b11e-4bce-9967-673ca19b70f0-config-volume" (OuterVolumeSpecName: "config-volume") pod "92afaaee-b11e-4bce-9967-673ca19b70f0" (UID: "92afaaee-b11e-4bce-9967-673ca19b70f0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:15:02 crc kubenswrapper[4482]: I1125 08:15:02.795973 4482 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92afaaee-b11e-4bce-9967-673ca19b70f0-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4482]: I1125 08:15:02.800285 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92afaaee-b11e-4bce-9967-673ca19b70f0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "92afaaee-b11e-4bce-9967-673ca19b70f0" (UID: "92afaaee-b11e-4bce-9967-673ca19b70f0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:15:02 crc kubenswrapper[4482]: I1125 08:15:02.801133 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92afaaee-b11e-4bce-9967-673ca19b70f0-kube-api-access-r7sbh" (OuterVolumeSpecName: "kube-api-access-r7sbh") pod "92afaaee-b11e-4bce-9967-673ca19b70f0" (UID: "92afaaee-b11e-4bce-9967-673ca19b70f0"). InnerVolumeSpecName "kube-api-access-r7sbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:15:02 crc kubenswrapper[4482]: I1125 08:15:02.898581 4482 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/92afaaee-b11e-4bce-9967-673ca19b70f0-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:02 crc kubenswrapper[4482]: I1125 08:15:02.898618 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7sbh\" (UniqueName: \"kubernetes.io/projected/92afaaee-b11e-4bce-9967-673ca19b70f0-kube-api-access-r7sbh\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:03 crc kubenswrapper[4482]: I1125 08:15:03.323011 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" event={"ID":"92afaaee-b11e-4bce-9967-673ca19b70f0","Type":"ContainerDied","Data":"dd19a637c0551b36f52a0893de3ff790fab1627c082a4f286a9c93b656c1b58f"} Nov 25 08:15:03 crc kubenswrapper[4482]: I1125 08:15:03.323346 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd19a637c0551b36f52a0893de3ff790fab1627c082a4f286a9c93b656c1b58f" Nov 25 08:15:03 crc kubenswrapper[4482]: I1125 08:15:03.323047 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp" Nov 25 08:15:03 crc kubenswrapper[4482]: I1125 08:15:03.679260 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57"] Nov 25 08:15:03 crc kubenswrapper[4482]: I1125 08:15:03.686040 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400930-qcs57"] Nov 25 08:15:03 crc kubenswrapper[4482]: I1125 08:15:03.843963 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18e69ffe-f523-4858-9f42-6f7d85a590a3" path="/var/lib/kubelet/pods/18e69ffe-f523-4858-9f42-6f7d85a590a3/volumes" Nov 25 08:15:09 crc kubenswrapper[4482]: E1125 08:15:09.037122 4482 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.26.133:43924->192.168.26.133:42749: write tcp 192.168.26.133:43924->192.168.26.133:42749: write: broken pipe Nov 25 08:15:41 crc kubenswrapper[4482]: I1125 08:15:41.375601 4482 scope.go:117] "RemoveContainer" containerID="458db17c88bfbd211181dc4a38c60cf866df53d27b5e826bae0eebaec2e88400" Nov 25 08:17:09 crc kubenswrapper[4482]: I1125 08:17:09.118244 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:17:09 crc kubenswrapper[4482]: I1125 08:17:09.118615 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:17:39 crc kubenswrapper[4482]: I1125 08:17:39.117846 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 25 08:17:39 crc kubenswrapper[4482]: I1125 08:17:39.118384 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:18:09 crc kubenswrapper[4482]: I1125 08:18:09.117998 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:18:09 crc kubenswrapper[4482]: I1125 08:18:09.118695 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:18:09 crc kubenswrapper[4482]: I1125 08:18:09.118759 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 08:18:09 crc kubenswrapper[4482]: I1125 08:18:09.120088 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1308a74aee46cf01529f8cee096203479d5ba7a7a014bb93c1f657cd68cb879a"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:18:09 crc kubenswrapper[4482]: I1125 08:18:09.120164 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://1308a74aee46cf01529f8cee096203479d5ba7a7a014bb93c1f657cd68cb879a" gracePeriod=600 Nov 25 08:18:09 crc kubenswrapper[4482]: I1125 08:18:09.746406 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="1308a74aee46cf01529f8cee096203479d5ba7a7a014bb93c1f657cd68cb879a" exitCode=0 Nov 25 08:18:09 crc kubenswrapper[4482]: I1125 08:18:09.748387 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"1308a74aee46cf01529f8cee096203479d5ba7a7a014bb93c1f657cd68cb879a"} Nov 25 08:18:09 crc kubenswrapper[4482]: I1125 08:18:09.748463 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe"} Nov 25 08:18:09 crc kubenswrapper[4482]: I1125 08:18:09.748493 4482 scope.go:117] "RemoveContainer" containerID="179d2e775d08bc66b6c0c31293fe951e347e24c90e092772118ae2db0a2b8cab" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.208855 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pdfsc"] Nov 25 08:19:12 crc kubenswrapper[4482]: E1125 08:19:12.209821 4482 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="92afaaee-b11e-4bce-9967-673ca19b70f0" containerName="collect-profiles" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.209836 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="92afaaee-b11e-4bce-9967-673ca19b70f0" containerName="collect-profiles" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.210043 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="92afaaee-b11e-4bce-9967-673ca19b70f0" containerName="collect-profiles" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.211837 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.229581 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pdfsc"] Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.278092 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aa623d2-09e0-426a-b6df-174c3a0e3e57-catalog-content\") pod \"community-operators-pdfsc\" (UID: \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\") " pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.278145 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aa623d2-09e0-426a-b6df-174c3a0e3e57-utilities\") pod \"community-operators-pdfsc\" (UID: \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\") " pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.278419 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m8bf\" (UniqueName: \"kubernetes.io/projected/0aa623d2-09e0-426a-b6df-174c3a0e3e57-kube-api-access-9m8bf\") pod \"community-operators-pdfsc\" (UID: \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\") " pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.380561 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aa623d2-09e0-426a-b6df-174c3a0e3e57-catalog-content\") pod \"community-operators-pdfsc\" (UID: \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\") " pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.380609 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aa623d2-09e0-426a-b6df-174c3a0e3e57-utilities\") pod \"community-operators-pdfsc\" (UID: \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\") " pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.380703 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m8bf\" (UniqueName: \"kubernetes.io/projected/0aa623d2-09e0-426a-b6df-174c3a0e3e57-kube-api-access-9m8bf\") pod \"community-operators-pdfsc\" (UID: \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\") " pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.380983 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0aa623d2-09e0-426a-b6df-174c3a0e3e57-catalog-content\") pod \"community-operators-pdfsc\" (UID: \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\") " pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.381084 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aa623d2-09e0-426a-b6df-174c3a0e3e57-utilities\") pod \"community-operators-pdfsc\" (UID: \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\") " pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.410398 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m8bf\" (UniqueName: \"kubernetes.io/projected/0aa623d2-09e0-426a-b6df-174c3a0e3e57-kube-api-access-9m8bf\") pod \"community-operators-pdfsc\" (UID: \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\") " pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:12 crc kubenswrapper[4482]: I1125 08:19:12.537641 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:13 crc kubenswrapper[4482]: I1125 08:19:13.057752 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pdfsc"] Nov 25 08:19:13 crc kubenswrapper[4482]: I1125 08:19:13.290549 4482 generic.go:334] "Generic (PLEG): container finished" podID="0aa623d2-09e0-426a-b6df-174c3a0e3e57" containerID="bdeb35742411332d5e0caf3ee874110ea991daadeb4fc8e6f6a4c3ff08ee60c8" exitCode=0 Nov 25 08:19:13 crc kubenswrapper[4482]: I1125 08:19:13.290648 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdfsc" event={"ID":"0aa623d2-09e0-426a-b6df-174c3a0e3e57","Type":"ContainerDied","Data":"bdeb35742411332d5e0caf3ee874110ea991daadeb4fc8e6f6a4c3ff08ee60c8"} Nov 25 08:19:13 crc kubenswrapper[4482]: I1125 08:19:13.290889 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdfsc" event={"ID":"0aa623d2-09e0-426a-b6df-174c3a0e3e57","Type":"ContainerStarted","Data":"788c65ce411258a075648676f425e01f20a27836efb7fd9c01c56637e339af4a"} Nov 25 08:19:13 crc kubenswrapper[4482]: I1125 08:19:13.293185 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:19:14 crc kubenswrapper[4482]: I1125 08:19:14.300484 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdfsc" event={"ID":"0aa623d2-09e0-426a-b6df-174c3a0e3e57","Type":"ContainerStarted","Data":"f8c52c6ec84b67f27ee578041f63f28c10d4c601c3c38fc3602d9b65d9267ebc"} Nov 25 08:19:15 crc kubenswrapper[4482]: I1125 08:19:15.309504 4482 generic.go:334] "Generic (PLEG): container finished" podID="0aa623d2-09e0-426a-b6df-174c3a0e3e57" containerID="f8c52c6ec84b67f27ee578041f63f28c10d4c601c3c38fc3602d9b65d9267ebc" exitCode=0 Nov 25 08:19:15 crc kubenswrapper[4482]: I1125 08:19:15.309574 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdfsc" event={"ID":"0aa623d2-09e0-426a-b6df-174c3a0e3e57","Type":"ContainerDied","Data":"f8c52c6ec84b67f27ee578041f63f28c10d4c601c3c38fc3602d9b65d9267ebc"} Nov 25 08:19:16 crc kubenswrapper[4482]: I1125 08:19:16.319441 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdfsc" 
event={"ID":"0aa623d2-09e0-426a-b6df-174c3a0e3e57","Type":"ContainerStarted","Data":"6f0740c7f46851bc8e6010f14ca1115003e624d845ea28d5a7977a3415f847cd"} Nov 25 08:19:16 crc kubenswrapper[4482]: I1125 08:19:16.338911 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pdfsc" podStartSLOduration=1.8597941310000001 podStartE2EDuration="4.338896865s" podCreationTimestamp="2025-11-25 08:19:12 +0000 UTC" firstStartedPulling="2025-11-25 08:19:13.292959074 +0000 UTC m=+5527.781190323" lastFinishedPulling="2025-11-25 08:19:15.772061798 +0000 UTC m=+5530.260293057" observedRunningTime="2025-11-25 08:19:16.335702374 +0000 UTC m=+5530.823933652" watchObservedRunningTime="2025-11-25 08:19:16.338896865 +0000 UTC m=+5530.827128123" Nov 25 08:19:19 crc kubenswrapper[4482]: I1125 08:19:19.993794 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f848l"] Nov 25 08:19:19 crc kubenswrapper[4482]: I1125 08:19:19.995894 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:20 crc kubenswrapper[4482]: I1125 08:19:20.005657 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f848l"] Nov 25 08:19:20 crc kubenswrapper[4482]: I1125 08:19:20.023679 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-catalog-content\") pod \"certified-operators-f848l\" (UID: \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\") " pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:20 crc kubenswrapper[4482]: I1125 08:19:20.024064 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4n9f\" (UniqueName: \"kubernetes.io/projected/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-kube-api-access-v4n9f\") pod \"certified-operators-f848l\" (UID: \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\") " pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:20 crc kubenswrapper[4482]: I1125 08:19:20.024202 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-utilities\") pod \"certified-operators-f848l\" (UID: \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\") " pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:20 crc kubenswrapper[4482]: I1125 08:19:20.126359 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4n9f\" (UniqueName: \"kubernetes.io/projected/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-kube-api-access-v4n9f\") pod \"certified-operators-f848l\" (UID: \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\") " pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:20 crc kubenswrapper[4482]: I1125 08:19:20.126408 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-utilities\") pod \"certified-operators-f848l\" (UID: \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\") " pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:20 crc kubenswrapper[4482]: I1125 08:19:20.126464 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-catalog-content\") pod \"certified-operators-f848l\" (UID: \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\") " pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:20 crc kubenswrapper[4482]: I1125 08:19:20.126854 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-catalog-content\") pod \"certified-operators-f848l\" (UID: \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\") " pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:20 crc kubenswrapper[4482]: I1125 08:19:20.127022 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-utilities\") pod \"certified-operators-f848l\" (UID: \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\") " pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:20 crc kubenswrapper[4482]: I1125 08:19:20.143955 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4n9f\" (UniqueName: \"kubernetes.io/projected/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-kube-api-access-v4n9f\") pod \"certified-operators-f848l\" (UID: \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\") " pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:20 crc kubenswrapper[4482]: I1125 08:19:20.312898 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:20 crc kubenswrapper[4482]: W1125 08:19:20.801762 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49ff7e15_bad7_4c6c_8993_6c7cc0d8019f.slice/crio-fae00d2deb4a7574aa944c0882552864432133f7822421fad7011e6d10d80474 WatchSource:0}: Error finding container fae00d2deb4a7574aa944c0882552864432133f7822421fad7011e6d10d80474: Status 404 returned error can't find the container with id fae00d2deb4a7574aa944c0882552864432133f7822421fad7011e6d10d80474 Nov 25 08:19:20 crc kubenswrapper[4482]: I1125 08:19:20.813805 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f848l"] Nov 25 08:19:21 crc kubenswrapper[4482]: I1125 08:19:21.356609 4482 generic.go:334] "Generic (PLEG): container finished" podID="49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" containerID="b877ebf9507c49b8d0e0e558e2efe9f25ba9838d46ec13f2e702a319ac44f644" exitCode=0 Nov 25 08:19:21 crc kubenswrapper[4482]: I1125 08:19:21.356691 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f848l" event={"ID":"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f","Type":"ContainerDied","Data":"b877ebf9507c49b8d0e0e558e2efe9f25ba9838d46ec13f2e702a319ac44f644"} Nov 25 08:19:21 crc kubenswrapper[4482]: I1125 08:19:21.357056 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f848l" event={"ID":"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f","Type":"ContainerStarted","Data":"fae00d2deb4a7574aa944c0882552864432133f7822421fad7011e6d10d80474"} Nov 25 08:19:21 crc kubenswrapper[4482]: I1125 08:19:21.994947 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ln4p4"] Nov 25 08:19:21 crc kubenswrapper[4482]: I1125 08:19:21.998627 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.014490 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln4p4"] Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.060823 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tldb\" (UniqueName: \"kubernetes.io/projected/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-kube-api-access-9tldb\") pod \"redhat-marketplace-ln4p4\" (UID: \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\") " pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.060978 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-utilities\") pod \"redhat-marketplace-ln4p4\" (UID: \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\") " pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.061001 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-catalog-content\") pod \"redhat-marketplace-ln4p4\" (UID: \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\") " pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.162739 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tldb\" (UniqueName: \"kubernetes.io/projected/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-kube-api-access-9tldb\") pod \"redhat-marketplace-ln4p4\" (UID: \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\") " pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.162978 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-utilities\") pod \"redhat-marketplace-ln4p4\" (UID: \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\") " pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.163087 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-catalog-content\") pod \"redhat-marketplace-ln4p4\" (UID: \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\") " pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.163435 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-utilities\") pod \"redhat-marketplace-ln4p4\" (UID: \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\") " pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.163442 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-catalog-content\") pod \"redhat-marketplace-ln4p4\" (UID: \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\") " pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.177718 4482 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9tldb\" (UniqueName: \"kubernetes.io/projected/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-kube-api-access-9tldb\") pod \"redhat-marketplace-ln4p4\" (UID: \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\") " pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.371214 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.374351 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f848l" event={"ID":"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f","Type":"ContainerStarted","Data":"d0946beaa7e60caead08675f8b607955a401d8ee171654d1255c3c19ca124778"} Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.538532 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.538703 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.586930 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:22 crc kubenswrapper[4482]: I1125 08:19:22.819048 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln4p4"] Nov 25 08:19:22 crc kubenswrapper[4482]: W1125 08:19:22.822191 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a8a9ca5_633b_4345_a756_9ec9cd3f5d96.slice/crio-2a634fb6c92a39bbc286c6ba2ce41d21136494b9124699a96a9cac6d21a17376 WatchSource:0}: Error finding container 2a634fb6c92a39bbc286c6ba2ce41d21136494b9124699a96a9cac6d21a17376: Status 404 returned error can't find the container with id 2a634fb6c92a39bbc286c6ba2ce41d21136494b9124699a96a9cac6d21a17376 Nov 25 08:19:23 crc kubenswrapper[4482]: I1125 08:19:23.382571 4482 generic.go:334] "Generic (PLEG): container finished" podID="49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" containerID="d0946beaa7e60caead08675f8b607955a401d8ee171654d1255c3c19ca124778" exitCode=0 Nov 25 08:19:23 crc kubenswrapper[4482]: I1125 08:19:23.382641 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f848l" event={"ID":"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f","Type":"ContainerDied","Data":"d0946beaa7e60caead08675f8b607955a401d8ee171654d1255c3c19ca124778"} Nov 25 08:19:23 crc kubenswrapper[4482]: I1125 08:19:23.384190 4482 generic.go:334] "Generic (PLEG): container finished" podID="0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" containerID="9fab7f7e8aafb6b6ef460bf727f8a75f5e10e25be93d3e7d2e014bab11948dcd" exitCode=0 Nov 25 08:19:23 crc kubenswrapper[4482]: I1125 08:19:23.386195 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln4p4" event={"ID":"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96","Type":"ContainerDied","Data":"9fab7f7e8aafb6b6ef460bf727f8a75f5e10e25be93d3e7d2e014bab11948dcd"} Nov 25 08:19:23 crc kubenswrapper[4482]: I1125 08:19:23.386262 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln4p4" 
event={"ID":"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96","Type":"ContainerStarted","Data":"2a634fb6c92a39bbc286c6ba2ce41d21136494b9124699a96a9cac6d21a17376"} Nov 25 08:19:23 crc kubenswrapper[4482]: I1125 08:19:23.425054 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:24 crc kubenswrapper[4482]: I1125 08:19:24.393895 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln4p4" event={"ID":"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96","Type":"ContainerStarted","Data":"bd97a62995a9df15cbcf74d5b996428e54c7af9080bb8c8dfedf3ba8eab46ea1"} Nov 25 08:19:24 crc kubenswrapper[4482]: I1125 08:19:24.396313 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f848l" event={"ID":"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f","Type":"ContainerStarted","Data":"496966dddc7f8bff514168913a09be7ca520596ccd8f65201ee13b64406bb600"} Nov 25 08:19:24 crc kubenswrapper[4482]: I1125 08:19:24.426272 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f848l" podStartSLOduration=2.952688794 podStartE2EDuration="5.426257709s" podCreationTimestamp="2025-11-25 08:19:19 +0000 UTC" firstStartedPulling="2025-11-25 08:19:21.358954109 +0000 UTC m=+5535.847185368" lastFinishedPulling="2025-11-25 08:19:23.832523023 +0000 UTC m=+5538.320754283" observedRunningTime="2025-11-25 08:19:24.421828301 +0000 UTC m=+5538.910059559" watchObservedRunningTime="2025-11-25 08:19:24.426257709 +0000 UTC m=+5538.914488968" Nov 25 08:19:25 crc kubenswrapper[4482]: I1125 08:19:25.383946 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pdfsc"] Nov 25 08:19:25 crc kubenswrapper[4482]: I1125 08:19:25.405489 4482 generic.go:334] "Generic (PLEG): container finished" podID="0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" containerID="bd97a62995a9df15cbcf74d5b996428e54c7af9080bb8c8dfedf3ba8eab46ea1" exitCode=0 Nov 25 08:19:25 crc kubenswrapper[4482]: I1125 08:19:25.405650 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln4p4" event={"ID":"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96","Type":"ContainerDied","Data":"bd97a62995a9df15cbcf74d5b996428e54c7af9080bb8c8dfedf3ba8eab46ea1"} Nov 25 08:19:25 crc kubenswrapper[4482]: I1125 08:19:25.405847 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pdfsc" podUID="0aa623d2-09e0-426a-b6df-174c3a0e3e57" containerName="registry-server" containerID="cri-o://6f0740c7f46851bc8e6010f14ca1115003e624d845ea28d5a7977a3415f847cd" gracePeriod=2 Nov 25 08:19:25 crc kubenswrapper[4482]: I1125 08:19:25.844083 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:25 crc kubenswrapper[4482]: I1125 08:19:25.925551 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aa623d2-09e0-426a-b6df-174c3a0e3e57-utilities\") pod \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\" (UID: \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\") " Nov 25 08:19:25 crc kubenswrapper[4482]: I1125 08:19:25.925594 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aa623d2-09e0-426a-b6df-174c3a0e3e57-catalog-content\") pod \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\" (UID: \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\") " Nov 25 08:19:25 crc kubenswrapper[4482]: I1125 08:19:25.925720 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m8bf\" (UniqueName: \"kubernetes.io/projected/0aa623d2-09e0-426a-b6df-174c3a0e3e57-kube-api-access-9m8bf\") pod \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\" (UID: \"0aa623d2-09e0-426a-b6df-174c3a0e3e57\") " Nov 25 08:19:25 crc kubenswrapper[4482]: I1125 08:19:25.927576 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aa623d2-09e0-426a-b6df-174c3a0e3e57-utilities" (OuterVolumeSpecName: "utilities") pod "0aa623d2-09e0-426a-b6df-174c3a0e3e57" (UID: "0aa623d2-09e0-426a-b6df-174c3a0e3e57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:19:25 crc kubenswrapper[4482]: I1125 08:19:25.931727 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa623d2-09e0-426a-b6df-174c3a0e3e57-kube-api-access-9m8bf" (OuterVolumeSpecName: "kube-api-access-9m8bf") pod "0aa623d2-09e0-426a-b6df-174c3a0e3e57" (UID: "0aa623d2-09e0-426a-b6df-174c3a0e3e57"). InnerVolumeSpecName "kube-api-access-9m8bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:19:25 crc kubenswrapper[4482]: I1125 08:19:25.968751 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aa623d2-09e0-426a-b6df-174c3a0e3e57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0aa623d2-09e0-426a-b6df-174c3a0e3e57" (UID: "0aa623d2-09e0-426a-b6df-174c3a0e3e57"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.027302 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aa623d2-09e0-426a-b6df-174c3a0e3e57-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.027329 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aa623d2-09e0-426a-b6df-174c3a0e3e57-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.027341 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m8bf\" (UniqueName: \"kubernetes.io/projected/0aa623d2-09e0-426a-b6df-174c3a0e3e57-kube-api-access-9m8bf\") on node \"crc\" DevicePath \"\"" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.417472 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln4p4" event={"ID":"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96","Type":"ContainerStarted","Data":"c498527d44ca29adcc8fae16fb8e8d8a95d1ef1db8285aea463a4d9030aba612"} Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.420406 4482 generic.go:334] "Generic (PLEG): container finished" podID="0aa623d2-09e0-426a-b6df-174c3a0e3e57" containerID="6f0740c7f46851bc8e6010f14ca1115003e624d845ea28d5a7977a3415f847cd" exitCode=0 Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.420441 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdfsc" event={"ID":"0aa623d2-09e0-426a-b6df-174c3a0e3e57","Type":"ContainerDied","Data":"6f0740c7f46851bc8e6010f14ca1115003e624d845ea28d5a7977a3415f847cd"} Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.420462 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdfsc" event={"ID":"0aa623d2-09e0-426a-b6df-174c3a0e3e57","Type":"ContainerDied","Data":"788c65ce411258a075648676f425e01f20a27836efb7fd9c01c56637e339af4a"} Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.420479 4482 scope.go:117] "RemoveContainer" containerID="6f0740c7f46851bc8e6010f14ca1115003e624d845ea28d5a7977a3415f847cd" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.420605 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pdfsc" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.446665 4482 scope.go:117] "RemoveContainer" containerID="f8c52c6ec84b67f27ee578041f63f28c10d4c601c3c38fc3602d9b65d9267ebc" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.459457 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ln4p4" podStartSLOduration=2.941433403 podStartE2EDuration="5.459441555s" podCreationTimestamp="2025-11-25 08:19:21 +0000 UTC" firstStartedPulling="2025-11-25 08:19:23.386245359 +0000 UTC m=+5537.874476617" lastFinishedPulling="2025-11-25 08:19:25.904253511 +0000 UTC m=+5540.392484769" observedRunningTime="2025-11-25 08:19:26.438293085 +0000 UTC m=+5540.926524364" watchObservedRunningTime="2025-11-25 08:19:26.459441555 +0000 UTC m=+5540.947672814" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.463197 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pdfsc"] Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.467134 4482 scope.go:117] "RemoveContainer" containerID="bdeb35742411332d5e0caf3ee874110ea991daadeb4fc8e6f6a4c3ff08ee60c8" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.472994 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pdfsc"] Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.492082 4482 scope.go:117] "RemoveContainer" containerID="6f0740c7f46851bc8e6010f14ca1115003e624d845ea28d5a7977a3415f847cd" Nov 25 08:19:26 crc kubenswrapper[4482]: E1125 08:19:26.492473 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f0740c7f46851bc8e6010f14ca1115003e624d845ea28d5a7977a3415f847cd\": container with ID starting with 6f0740c7f46851bc8e6010f14ca1115003e624d845ea28d5a7977a3415f847cd not found: ID does not exist" containerID="6f0740c7f46851bc8e6010f14ca1115003e624d845ea28d5a7977a3415f847cd" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.492509 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f0740c7f46851bc8e6010f14ca1115003e624d845ea28d5a7977a3415f847cd"} err="failed to get container status \"6f0740c7f46851bc8e6010f14ca1115003e624d845ea28d5a7977a3415f847cd\": rpc error: code = NotFound desc = could not find container \"6f0740c7f46851bc8e6010f14ca1115003e624d845ea28d5a7977a3415f847cd\": container with ID starting with 6f0740c7f46851bc8e6010f14ca1115003e624d845ea28d5a7977a3415f847cd not found: ID does not exist" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.492533 4482 scope.go:117] "RemoveContainer" containerID="f8c52c6ec84b67f27ee578041f63f28c10d4c601c3c38fc3602d9b65d9267ebc" Nov 25 08:19:26 crc kubenswrapper[4482]: E1125 08:19:26.492905 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8c52c6ec84b67f27ee578041f63f28c10d4c601c3c38fc3602d9b65d9267ebc\": container with ID starting with f8c52c6ec84b67f27ee578041f63f28c10d4c601c3c38fc3602d9b65d9267ebc not found: ID does not exist" containerID="f8c52c6ec84b67f27ee578041f63f28c10d4c601c3c38fc3602d9b65d9267ebc" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.492934 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8c52c6ec84b67f27ee578041f63f28c10d4c601c3c38fc3602d9b65d9267ebc"} err="failed to get 
container status \"f8c52c6ec84b67f27ee578041f63f28c10d4c601c3c38fc3602d9b65d9267ebc\": rpc error: code = NotFound desc = could not find container \"f8c52c6ec84b67f27ee578041f63f28c10d4c601c3c38fc3602d9b65d9267ebc\": container with ID starting with f8c52c6ec84b67f27ee578041f63f28c10d4c601c3c38fc3602d9b65d9267ebc not found: ID does not exist" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.492954 4482 scope.go:117] "RemoveContainer" containerID="bdeb35742411332d5e0caf3ee874110ea991daadeb4fc8e6f6a4c3ff08ee60c8" Nov 25 08:19:26 crc kubenswrapper[4482]: E1125 08:19:26.493277 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdeb35742411332d5e0caf3ee874110ea991daadeb4fc8e6f6a4c3ff08ee60c8\": container with ID starting with bdeb35742411332d5e0caf3ee874110ea991daadeb4fc8e6f6a4c3ff08ee60c8 not found: ID does not exist" containerID="bdeb35742411332d5e0caf3ee874110ea991daadeb4fc8e6f6a4c3ff08ee60c8" Nov 25 08:19:26 crc kubenswrapper[4482]: I1125 08:19:26.493302 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdeb35742411332d5e0caf3ee874110ea991daadeb4fc8e6f6a4c3ff08ee60c8"} err="failed to get container status \"bdeb35742411332d5e0caf3ee874110ea991daadeb4fc8e6f6a4c3ff08ee60c8\": rpc error: code = NotFound desc = could not find container \"bdeb35742411332d5e0caf3ee874110ea991daadeb4fc8e6f6a4c3ff08ee60c8\": container with ID starting with bdeb35742411332d5e0caf3ee874110ea991daadeb4fc8e6f6a4c3ff08ee60c8 not found: ID does not exist" Nov 25 08:19:27 crc kubenswrapper[4482]: I1125 08:19:27.839163 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aa623d2-09e0-426a-b6df-174c3a0e3e57" path="/var/lib/kubelet/pods/0aa623d2-09e0-426a-b6df-174c3a0e3e57/volumes" Nov 25 08:19:30 crc kubenswrapper[4482]: I1125 08:19:30.313033 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:30 crc kubenswrapper[4482]: I1125 08:19:30.313415 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:30 crc kubenswrapper[4482]: I1125 08:19:30.345916 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:30 crc kubenswrapper[4482]: I1125 08:19:30.490810 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:31 crc kubenswrapper[4482]: I1125 08:19:31.584862 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f848l"] Nov 25 08:19:32 crc kubenswrapper[4482]: I1125 08:19:32.372041 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:32 crc kubenswrapper[4482]: I1125 08:19:32.372083 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:32 crc kubenswrapper[4482]: I1125 08:19:32.405365 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:32 crc kubenswrapper[4482]: I1125 08:19:32.463543 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f848l" 
podUID="49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" containerName="registry-server" containerID="cri-o://496966dddc7f8bff514168913a09be7ca520596ccd8f65201ee13b64406bb600" gracePeriod=2 Nov 25 08:19:32 crc kubenswrapper[4482]: I1125 08:19:32.500198 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:32 crc kubenswrapper[4482]: I1125 08:19:32.901728 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:32 crc kubenswrapper[4482]: I1125 08:19:32.951647 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-catalog-content\") pod \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\" (UID: \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\") " Nov 25 08:19:32 crc kubenswrapper[4482]: I1125 08:19:32.951773 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-utilities\") pod \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\" (UID: \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\") " Nov 25 08:19:32 crc kubenswrapper[4482]: I1125 08:19:32.951898 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4n9f\" (UniqueName: \"kubernetes.io/projected/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-kube-api-access-v4n9f\") pod \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\" (UID: \"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f\") " Nov 25 08:19:32 crc kubenswrapper[4482]: I1125 08:19:32.953984 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-utilities" (OuterVolumeSpecName: "utilities") pod "49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" (UID: "49ff7e15-bad7-4c6c-8993-6c7cc0d8019f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:19:32 crc kubenswrapper[4482]: I1125 08:19:32.960055 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-kube-api-access-v4n9f" (OuterVolumeSpecName: "kube-api-access-v4n9f") pod "49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" (UID: "49ff7e15-bad7-4c6c-8993-6c7cc0d8019f"). InnerVolumeSpecName "kube-api-access-v4n9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:19:32 crc kubenswrapper[4482]: I1125 08:19:32.993111 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" (UID: "49ff7e15-bad7-4c6c-8993-6c7cc0d8019f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.054284 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4n9f\" (UniqueName: \"kubernetes.io/projected/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-kube-api-access-v4n9f\") on node \"crc\" DevicePath \"\"" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.054313 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.054321 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.480349 4482 generic.go:334] "Generic (PLEG): container finished" podID="49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" containerID="496966dddc7f8bff514168913a09be7ca520596ccd8f65201ee13b64406bb600" exitCode=0 Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.480415 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f848l" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.480436 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f848l" event={"ID":"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f","Type":"ContainerDied","Data":"496966dddc7f8bff514168913a09be7ca520596ccd8f65201ee13b64406bb600"} Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.480701 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f848l" event={"ID":"49ff7e15-bad7-4c6c-8993-6c7cc0d8019f","Type":"ContainerDied","Data":"fae00d2deb4a7574aa944c0882552864432133f7822421fad7011e6d10d80474"} Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.480733 4482 scope.go:117] "RemoveContainer" containerID="496966dddc7f8bff514168913a09be7ca520596ccd8f65201ee13b64406bb600" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.514545 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f848l"] Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.516970 4482 scope.go:117] "RemoveContainer" containerID="d0946beaa7e60caead08675f8b607955a401d8ee171654d1255c3c19ca124778" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.523410 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f848l"] Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.538804 4482 scope.go:117] "RemoveContainer" containerID="b877ebf9507c49b8d0e0e558e2efe9f25ba9838d46ec13f2e702a319ac44f644" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.568735 4482 scope.go:117] "RemoveContainer" containerID="496966dddc7f8bff514168913a09be7ca520596ccd8f65201ee13b64406bb600" Nov 25 08:19:33 crc kubenswrapper[4482]: E1125 08:19:33.569030 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"496966dddc7f8bff514168913a09be7ca520596ccd8f65201ee13b64406bb600\": container with ID starting with 496966dddc7f8bff514168913a09be7ca520596ccd8f65201ee13b64406bb600 not found: ID does not exist" containerID="496966dddc7f8bff514168913a09be7ca520596ccd8f65201ee13b64406bb600" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.569067 
4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496966dddc7f8bff514168913a09be7ca520596ccd8f65201ee13b64406bb600"} err="failed to get container status \"496966dddc7f8bff514168913a09be7ca520596ccd8f65201ee13b64406bb600\": rpc error: code = NotFound desc = could not find container \"496966dddc7f8bff514168913a09be7ca520596ccd8f65201ee13b64406bb600\": container with ID starting with 496966dddc7f8bff514168913a09be7ca520596ccd8f65201ee13b64406bb600 not found: ID does not exist" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.569094 4482 scope.go:117] "RemoveContainer" containerID="d0946beaa7e60caead08675f8b607955a401d8ee171654d1255c3c19ca124778" Nov 25 08:19:33 crc kubenswrapper[4482]: E1125 08:19:33.569426 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0946beaa7e60caead08675f8b607955a401d8ee171654d1255c3c19ca124778\": container with ID starting with d0946beaa7e60caead08675f8b607955a401d8ee171654d1255c3c19ca124778 not found: ID does not exist" containerID="d0946beaa7e60caead08675f8b607955a401d8ee171654d1255c3c19ca124778" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.569446 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0946beaa7e60caead08675f8b607955a401d8ee171654d1255c3c19ca124778"} err="failed to get container status \"d0946beaa7e60caead08675f8b607955a401d8ee171654d1255c3c19ca124778\": rpc error: code = NotFound desc = could not find container \"d0946beaa7e60caead08675f8b607955a401d8ee171654d1255c3c19ca124778\": container with ID starting with d0946beaa7e60caead08675f8b607955a401d8ee171654d1255c3c19ca124778 not found: ID does not exist" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.569458 4482 scope.go:117] "RemoveContainer" containerID="b877ebf9507c49b8d0e0e558e2efe9f25ba9838d46ec13f2e702a319ac44f644" Nov 25 08:19:33 crc kubenswrapper[4482]: E1125 08:19:33.569770 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b877ebf9507c49b8d0e0e558e2efe9f25ba9838d46ec13f2e702a319ac44f644\": container with ID starting with b877ebf9507c49b8d0e0e558e2efe9f25ba9838d46ec13f2e702a319ac44f644 not found: ID does not exist" containerID="b877ebf9507c49b8d0e0e558e2efe9f25ba9838d46ec13f2e702a319ac44f644" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.569802 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b877ebf9507c49b8d0e0e558e2efe9f25ba9838d46ec13f2e702a319ac44f644"} err="failed to get container status \"b877ebf9507c49b8d0e0e558e2efe9f25ba9838d46ec13f2e702a319ac44f644\": rpc error: code = NotFound desc = could not find container \"b877ebf9507c49b8d0e0e558e2efe9f25ba9838d46ec13f2e702a319ac44f644\": container with ID starting with b877ebf9507c49b8d0e0e558e2efe9f25ba9838d46ec13f2e702a319ac44f644 not found: ID does not exist" Nov 25 08:19:33 crc kubenswrapper[4482]: I1125 08:19:33.838811 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" path="/var/lib/kubelet/pods/49ff7e15-bad7-4c6c-8993-6c7cc0d8019f/volumes" Nov 25 08:19:34 crc kubenswrapper[4482]: I1125 08:19:34.784140 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln4p4"] Nov 25 08:19:34 crc kubenswrapper[4482]: I1125 08:19:34.784675 4482 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-ln4p4" podUID="0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" containerName="registry-server" containerID="cri-o://c498527d44ca29adcc8fae16fb8e8d8a95d1ef1db8285aea463a4d9030aba612" gracePeriod=2 Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.233742 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.288667 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-catalog-content\") pod \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\" (UID: \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\") " Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.288826 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-utilities\") pod \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\" (UID: \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\") " Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.288914 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tldb\" (UniqueName: \"kubernetes.io/projected/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-kube-api-access-9tldb\") pod \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\" (UID: \"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96\") " Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.289515 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-utilities" (OuterVolumeSpecName: "utilities") pod "0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" (UID: "0a8a9ca5-633b-4345-a756-9ec9cd3f5d96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.289969 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.293740 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-kube-api-access-9tldb" (OuterVolumeSpecName: "kube-api-access-9tldb") pod "0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" (UID: "0a8a9ca5-633b-4345-a756-9ec9cd3f5d96"). InnerVolumeSpecName "kube-api-access-9tldb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.302285 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" (UID: "0a8a9ca5-633b-4345-a756-9ec9cd3f5d96"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.391062 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.391085 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tldb\" (UniqueName: \"kubernetes.io/projected/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96-kube-api-access-9tldb\") on node \"crc\" DevicePath \"\"" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.499404 4482 generic.go:334] "Generic (PLEG): container finished" podID="0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" containerID="c498527d44ca29adcc8fae16fb8e8d8a95d1ef1db8285aea463a4d9030aba612" exitCode=0 Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.499446 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln4p4" event={"ID":"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96","Type":"ContainerDied","Data":"c498527d44ca29adcc8fae16fb8e8d8a95d1ef1db8285aea463a4d9030aba612"} Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.499474 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln4p4" event={"ID":"0a8a9ca5-633b-4345-a756-9ec9cd3f5d96","Type":"ContainerDied","Data":"2a634fb6c92a39bbc286c6ba2ce41d21136494b9124699a96a9cac6d21a17376"} Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.499491 4482 scope.go:117] "RemoveContainer" containerID="c498527d44ca29adcc8fae16fb8e8d8a95d1ef1db8285aea463a4d9030aba612" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.499715 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln4p4" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.526007 4482 scope.go:117] "RemoveContainer" containerID="bd97a62995a9df15cbcf74d5b996428e54c7af9080bb8c8dfedf3ba8eab46ea1" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.530654 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln4p4"] Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.540451 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln4p4"] Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.557763 4482 scope.go:117] "RemoveContainer" containerID="9fab7f7e8aafb6b6ef460bf727f8a75f5e10e25be93d3e7d2e014bab11948dcd" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.578744 4482 scope.go:117] "RemoveContainer" containerID="c498527d44ca29adcc8fae16fb8e8d8a95d1ef1db8285aea463a4d9030aba612" Nov 25 08:19:35 crc kubenswrapper[4482]: E1125 08:19:35.579052 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c498527d44ca29adcc8fae16fb8e8d8a95d1ef1db8285aea463a4d9030aba612\": container with ID starting with c498527d44ca29adcc8fae16fb8e8d8a95d1ef1db8285aea463a4d9030aba612 not found: ID does not exist" containerID="c498527d44ca29adcc8fae16fb8e8d8a95d1ef1db8285aea463a4d9030aba612" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.579198 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c498527d44ca29adcc8fae16fb8e8d8a95d1ef1db8285aea463a4d9030aba612"} err="failed to get container status \"c498527d44ca29adcc8fae16fb8e8d8a95d1ef1db8285aea463a4d9030aba612\": rpc error: code = NotFound desc = could not find container \"c498527d44ca29adcc8fae16fb8e8d8a95d1ef1db8285aea463a4d9030aba612\": container with ID starting with c498527d44ca29adcc8fae16fb8e8d8a95d1ef1db8285aea463a4d9030aba612 not found: ID does not exist" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.579334 4482 scope.go:117] "RemoveContainer" containerID="bd97a62995a9df15cbcf74d5b996428e54c7af9080bb8c8dfedf3ba8eab46ea1" Nov 25 08:19:35 crc kubenswrapper[4482]: E1125 08:19:35.579643 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd97a62995a9df15cbcf74d5b996428e54c7af9080bb8c8dfedf3ba8eab46ea1\": container with ID starting with bd97a62995a9df15cbcf74d5b996428e54c7af9080bb8c8dfedf3ba8eab46ea1 not found: ID does not exist" containerID="bd97a62995a9df15cbcf74d5b996428e54c7af9080bb8c8dfedf3ba8eab46ea1" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.579709 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd97a62995a9df15cbcf74d5b996428e54c7af9080bb8c8dfedf3ba8eab46ea1"} err="failed to get container status \"bd97a62995a9df15cbcf74d5b996428e54c7af9080bb8c8dfedf3ba8eab46ea1\": rpc error: code = NotFound desc = could not find container \"bd97a62995a9df15cbcf74d5b996428e54c7af9080bb8c8dfedf3ba8eab46ea1\": container with ID starting with bd97a62995a9df15cbcf74d5b996428e54c7af9080bb8c8dfedf3ba8eab46ea1 not found: ID does not exist" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.579729 4482 scope.go:117] "RemoveContainer" containerID="9fab7f7e8aafb6b6ef460bf727f8a75f5e10e25be93d3e7d2e014bab11948dcd" Nov 25 08:19:35 crc kubenswrapper[4482]: E1125 08:19:35.580096 4482 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9fab7f7e8aafb6b6ef460bf727f8a75f5e10e25be93d3e7d2e014bab11948dcd\": container with ID starting with 9fab7f7e8aafb6b6ef460bf727f8a75f5e10e25be93d3e7d2e014bab11948dcd not found: ID does not exist" containerID="9fab7f7e8aafb6b6ef460bf727f8a75f5e10e25be93d3e7d2e014bab11948dcd" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.580236 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fab7f7e8aafb6b6ef460bf727f8a75f5e10e25be93d3e7d2e014bab11948dcd"} err="failed to get container status \"9fab7f7e8aafb6b6ef460bf727f8a75f5e10e25be93d3e7d2e014bab11948dcd\": rpc error: code = NotFound desc = could not find container \"9fab7f7e8aafb6b6ef460bf727f8a75f5e10e25be93d3e7d2e014bab11948dcd\": container with ID starting with 9fab7f7e8aafb6b6ef460bf727f8a75f5e10e25be93d3e7d2e014bab11948dcd not found: ID does not exist" Nov 25 08:19:35 crc kubenswrapper[4482]: I1125 08:19:35.843359 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" path="/var/lib/kubelet/pods/0a8a9ca5-633b-4345-a756-9ec9cd3f5d96/volumes" Nov 25 08:20:09 crc kubenswrapper[4482]: I1125 08:20:09.117762 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:20:09 crc kubenswrapper[4482]: I1125 08:20:09.118085 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:20:27 crc kubenswrapper[4482]: E1125 08:20:27.893466 4482 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.26.133:37560->192.168.26.133:42749: write tcp 192.168.26.133:37560->192.168.26.133:42749: write: broken pipe Nov 25 08:20:39 crc kubenswrapper[4482]: I1125 08:20:39.118068 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:20:39 crc kubenswrapper[4482]: I1125 08:20:39.118440 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:21:09 crc kubenswrapper[4482]: I1125 08:21:09.118126 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:21:09 crc kubenswrapper[4482]: I1125 08:21:09.118764 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:21:09 crc kubenswrapper[4482]: I1125 08:21:09.118819 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 08:21:09 crc kubenswrapper[4482]: I1125 08:21:09.120006 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:21:09 crc kubenswrapper[4482]: I1125 08:21:09.120067 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" gracePeriod=600 Nov 25 08:21:09 crc kubenswrapper[4482]: E1125 08:21:09.244044 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:21:10 crc kubenswrapper[4482]: I1125 08:21:10.248984 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" exitCode=0 Nov 25 08:21:10 crc kubenswrapper[4482]: I1125 08:21:10.249032 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe"} Nov 25 08:21:10 crc kubenswrapper[4482]: I1125 08:21:10.249079 4482 scope.go:117] "RemoveContainer" containerID="1308a74aee46cf01529f8cee096203479d5ba7a7a014bb93c1f657cd68cb879a" Nov 25 08:21:10 crc kubenswrapper[4482]: I1125 08:21:10.250224 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:21:10 crc kubenswrapper[4482]: E1125 08:21:10.250613 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:21:20 crc kubenswrapper[4482]: I1125 08:21:20.831941 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:21:20 crc kubenswrapper[4482]: E1125 08:21:20.832683 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:21:31 crc kubenswrapper[4482]: I1125 08:21:31.830924 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:21:31 crc kubenswrapper[4482]: E1125 08:21:31.832064 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:21:43 crc kubenswrapper[4482]: I1125 08:21:43.831917 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:21:43 crc kubenswrapper[4482]: E1125 08:21:43.832910 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:21:58 crc kubenswrapper[4482]: I1125 08:21:58.831360 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:21:58 crc kubenswrapper[4482]: E1125 08:21:58.831894 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:22:13 crc kubenswrapper[4482]: I1125 08:22:13.830942 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:22:13 crc kubenswrapper[4482]: E1125 08:22:13.831630 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:22:25 crc kubenswrapper[4482]: I1125 08:22:25.836244 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:22:25 crc kubenswrapper[4482]: E1125 08:22:25.836786 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:22:38 crc kubenswrapper[4482]: I1125 08:22:38.831040 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:22:38 crc kubenswrapper[4482]: E1125 08:22:38.831935 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:22:52 crc kubenswrapper[4482]: I1125 08:22:52.830633 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:22:52 crc kubenswrapper[4482]: E1125 08:22:52.831319 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:23:05 crc kubenswrapper[4482]: I1125 08:23:05.835955 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:23:05 crc kubenswrapper[4482]: E1125 08:23:05.836751 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:23:18 crc kubenswrapper[4482]: I1125 08:23:18.831826 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:23:18 crc kubenswrapper[4482]: E1125 08:23:18.833601 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:23:31 crc kubenswrapper[4482]: I1125 08:23:31.831120 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:23:31 crc kubenswrapper[4482]: E1125 08:23:31.832195 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:23:43 crc kubenswrapper[4482]: I1125 08:23:43.831542 4482 
scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:23:43 crc kubenswrapper[4482]: E1125 08:23:43.832635 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:23:54 crc kubenswrapper[4482]: I1125 08:23:54.830481 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:23:54 crc kubenswrapper[4482]: E1125 08:23:54.831238 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:24:09 crc kubenswrapper[4482]: I1125 08:24:09.831465 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:24:09 crc kubenswrapper[4482]: E1125 08:24:09.832200 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:24:22 crc kubenswrapper[4482]: I1125 08:24:22.831135 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:24:22 crc kubenswrapper[4482]: E1125 08:24:22.831958 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.267952 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7pjs9"] Nov 25 08:24:33 crc kubenswrapper[4482]: E1125 08:24:33.268675 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa623d2-09e0-426a-b6df-174c3a0e3e57" containerName="extract-utilities" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.268687 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa623d2-09e0-426a-b6df-174c3a0e3e57" containerName="extract-utilities" Nov 25 08:24:33 crc kubenswrapper[4482]: E1125 08:24:33.268700 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa623d2-09e0-426a-b6df-174c3a0e3e57" containerName="registry-server" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.268705 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa623d2-09e0-426a-b6df-174c3a0e3e57" 
containerName="registry-server" Nov 25 08:24:33 crc kubenswrapper[4482]: E1125 08:24:33.268716 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" containerName="registry-server" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.268723 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" containerName="registry-server" Nov 25 08:24:33 crc kubenswrapper[4482]: E1125 08:24:33.268732 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" containerName="extract-utilities" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.268739 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" containerName="extract-utilities" Nov 25 08:24:33 crc kubenswrapper[4482]: E1125 08:24:33.268751 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa623d2-09e0-426a-b6df-174c3a0e3e57" containerName="extract-content" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.268756 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa623d2-09e0-426a-b6df-174c3a0e3e57" containerName="extract-content" Nov 25 08:24:33 crc kubenswrapper[4482]: E1125 08:24:33.268764 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" containerName="extract-utilities" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.268769 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" containerName="extract-utilities" Nov 25 08:24:33 crc kubenswrapper[4482]: E1125 08:24:33.268785 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" containerName="registry-server" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.268790 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" containerName="registry-server" Nov 25 08:24:33 crc kubenswrapper[4482]: E1125 08:24:33.268804 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" containerName="extract-content" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.268809 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" containerName="extract-content" Nov 25 08:24:33 crc kubenswrapper[4482]: E1125 08:24:33.268818 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" containerName="extract-content" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.268823 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" containerName="extract-content" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.269019 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ff7e15-bad7-4c6c-8993-6c7cc0d8019f" containerName="registry-server" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.269030 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8a9ca5-633b-4345-a756-9ec9cd3f5d96" containerName="registry-server" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.269047 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa623d2-09e0-426a-b6df-174c3a0e3e57" containerName="registry-server" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.302793 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-7pjs9"] Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.304341 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.339912 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a71985-19f1-4b72-8ceb-9c7d591c4710-utilities\") pod \"redhat-operators-7pjs9\" (UID: \"02a71985-19f1-4b72-8ceb-9c7d591c4710\") " pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.340418 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fclfp\" (UniqueName: \"kubernetes.io/projected/02a71985-19f1-4b72-8ceb-9c7d591c4710-kube-api-access-fclfp\") pod \"redhat-operators-7pjs9\" (UID: \"02a71985-19f1-4b72-8ceb-9c7d591c4710\") " pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.340577 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a71985-19f1-4b72-8ceb-9c7d591c4710-catalog-content\") pod \"redhat-operators-7pjs9\" (UID: \"02a71985-19f1-4b72-8ceb-9c7d591c4710\") " pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.443252 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a71985-19f1-4b72-8ceb-9c7d591c4710-utilities\") pod \"redhat-operators-7pjs9\" (UID: \"02a71985-19f1-4b72-8ceb-9c7d591c4710\") " pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.443792 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fclfp\" (UniqueName: \"kubernetes.io/projected/02a71985-19f1-4b72-8ceb-9c7d591c4710-kube-api-access-fclfp\") pod \"redhat-operators-7pjs9\" (UID: \"02a71985-19f1-4b72-8ceb-9c7d591c4710\") " pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.443847 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a71985-19f1-4b72-8ceb-9c7d591c4710-catalog-content\") pod \"redhat-operators-7pjs9\" (UID: \"02a71985-19f1-4b72-8ceb-9c7d591c4710\") " pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.444032 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a71985-19f1-4b72-8ceb-9c7d591c4710-utilities\") pod \"redhat-operators-7pjs9\" (UID: \"02a71985-19f1-4b72-8ceb-9c7d591c4710\") " pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.444572 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a71985-19f1-4b72-8ceb-9c7d591c4710-catalog-content\") pod \"redhat-operators-7pjs9\" (UID: \"02a71985-19f1-4b72-8ceb-9c7d591c4710\") " pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.475741 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fclfp\" (UniqueName: \"kubernetes.io/projected/02a71985-19f1-4b72-8ceb-9c7d591c4710-kube-api-access-fclfp\") pod \"redhat-operators-7pjs9\" (UID: \"02a71985-19f1-4b72-8ceb-9c7d591c4710\") " pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:33 crc kubenswrapper[4482]: I1125 08:24:33.628146 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:34 crc kubenswrapper[4482]: I1125 08:24:34.140059 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7pjs9"] Nov 25 08:24:34 crc kubenswrapper[4482]: I1125 08:24:34.874425 4482 generic.go:334] "Generic (PLEG): container finished" podID="02a71985-19f1-4b72-8ceb-9c7d591c4710" containerID="9b7ec5eca2293d053fd0df90042c56b41b24448830e5d16db9f9abd45866d59f" exitCode=0 Nov 25 08:24:34 crc kubenswrapper[4482]: I1125 08:24:34.874539 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pjs9" event={"ID":"02a71985-19f1-4b72-8ceb-9c7d591c4710","Type":"ContainerDied","Data":"9b7ec5eca2293d053fd0df90042c56b41b24448830e5d16db9f9abd45866d59f"} Nov 25 08:24:34 crc kubenswrapper[4482]: I1125 08:24:34.875738 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pjs9" event={"ID":"02a71985-19f1-4b72-8ceb-9c7d591c4710","Type":"ContainerStarted","Data":"f76ca1a0346f01ffe2ae7ba4f3ea7d5c956ec5c2cd1b57a861d0ef4791acbd54"} Nov 25 08:24:34 crc kubenswrapper[4482]: I1125 08:24:34.876984 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:24:35 crc kubenswrapper[4482]: I1125 08:24:35.889070 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pjs9" event={"ID":"02a71985-19f1-4b72-8ceb-9c7d591c4710","Type":"ContainerStarted","Data":"6fbdad553f65519d6e93e21ae9de0da9b54351bc2284b550b7cea5bb3641d767"} Nov 25 08:24:37 crc kubenswrapper[4482]: I1125 08:24:37.831756 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:24:37 crc kubenswrapper[4482]: E1125 08:24:37.832065 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:24:38 crc kubenswrapper[4482]: I1125 08:24:38.917812 4482 generic.go:334] "Generic (PLEG): container finished" podID="02a71985-19f1-4b72-8ceb-9c7d591c4710" containerID="6fbdad553f65519d6e93e21ae9de0da9b54351bc2284b550b7cea5bb3641d767" exitCode=0 Nov 25 08:24:38 crc kubenswrapper[4482]: I1125 08:24:38.917874 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pjs9" event={"ID":"02a71985-19f1-4b72-8ceb-9c7d591c4710","Type":"ContainerDied","Data":"6fbdad553f65519d6e93e21ae9de0da9b54351bc2284b550b7cea5bb3641d767"} Nov 25 08:24:39 crc kubenswrapper[4482]: I1125 08:24:39.927950 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pjs9" 
event={"ID":"02a71985-19f1-4b72-8ceb-9c7d591c4710","Type":"ContainerStarted","Data":"804c4c24594f7af54a6890dc782ca16d226eb170bec085f313b5476dae99461e"} Nov 25 08:24:39 crc kubenswrapper[4482]: I1125 08:24:39.946052 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7pjs9" podStartSLOduration=2.419649472 podStartE2EDuration="6.946037139s" podCreationTimestamp="2025-11-25 08:24:33 +0000 UTC" firstStartedPulling="2025-11-25 08:24:34.876667603 +0000 UTC m=+5849.364898862" lastFinishedPulling="2025-11-25 08:24:39.40305527 +0000 UTC m=+5853.891286529" observedRunningTime="2025-11-25 08:24:39.941378168 +0000 UTC m=+5854.429609417" watchObservedRunningTime="2025-11-25 08:24:39.946037139 +0000 UTC m=+5854.434268388" Nov 25 08:24:43 crc kubenswrapper[4482]: I1125 08:24:43.628923 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:43 crc kubenswrapper[4482]: I1125 08:24:43.629134 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:44 crc kubenswrapper[4482]: I1125 08:24:44.669146 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7pjs9" podUID="02a71985-19f1-4b72-8ceb-9c7d591c4710" containerName="registry-server" probeResult="failure" output=< Nov 25 08:24:44 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 08:24:44 crc kubenswrapper[4482]: > Nov 25 08:24:52 crc kubenswrapper[4482]: I1125 08:24:52.830623 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:24:52 crc kubenswrapper[4482]: E1125 08:24:52.831198 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:24:53 crc kubenswrapper[4482]: I1125 08:24:53.664403 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:53 crc kubenswrapper[4482]: I1125 08:24:53.701791 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:53 crc kubenswrapper[4482]: I1125 08:24:53.893314 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7pjs9"] Nov 25 08:24:55 crc kubenswrapper[4482]: I1125 08:24:55.025533 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7pjs9" podUID="02a71985-19f1-4b72-8ceb-9c7d591c4710" containerName="registry-server" containerID="cri-o://804c4c24594f7af54a6890dc782ca16d226eb170bec085f313b5476dae99461e" gracePeriod=2 Nov 25 08:24:55 crc kubenswrapper[4482]: I1125 08:24:55.443058 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:55 crc kubenswrapper[4482]: I1125 08:24:55.643328 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a71985-19f1-4b72-8ceb-9c7d591c4710-catalog-content\") pod \"02a71985-19f1-4b72-8ceb-9c7d591c4710\" (UID: \"02a71985-19f1-4b72-8ceb-9c7d591c4710\") " Nov 25 08:24:55 crc kubenswrapper[4482]: I1125 08:24:55.643531 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fclfp\" (UniqueName: \"kubernetes.io/projected/02a71985-19f1-4b72-8ceb-9c7d591c4710-kube-api-access-fclfp\") pod \"02a71985-19f1-4b72-8ceb-9c7d591c4710\" (UID: \"02a71985-19f1-4b72-8ceb-9c7d591c4710\") " Nov 25 08:24:55 crc kubenswrapper[4482]: I1125 08:24:55.643622 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a71985-19f1-4b72-8ceb-9c7d591c4710-utilities\") pod \"02a71985-19f1-4b72-8ceb-9c7d591c4710\" (UID: \"02a71985-19f1-4b72-8ceb-9c7d591c4710\") " Nov 25 08:24:55 crc kubenswrapper[4482]: I1125 08:24:55.644362 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02a71985-19f1-4b72-8ceb-9c7d591c4710-utilities" (OuterVolumeSpecName: "utilities") pod "02a71985-19f1-4b72-8ceb-9c7d591c4710" (UID: "02a71985-19f1-4b72-8ceb-9c7d591c4710"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:24:55 crc kubenswrapper[4482]: I1125 08:24:55.645055 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02a71985-19f1-4b72-8ceb-9c7d591c4710-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:55 crc kubenswrapper[4482]: I1125 08:24:55.656388 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a71985-19f1-4b72-8ceb-9c7d591c4710-kube-api-access-fclfp" (OuterVolumeSpecName: "kube-api-access-fclfp") pod "02a71985-19f1-4b72-8ceb-9c7d591c4710" (UID: "02a71985-19f1-4b72-8ceb-9c7d591c4710"). InnerVolumeSpecName "kube-api-access-fclfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:24:55 crc kubenswrapper[4482]: I1125 08:24:55.710666 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02a71985-19f1-4b72-8ceb-9c7d591c4710-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02a71985-19f1-4b72-8ceb-9c7d591c4710" (UID: "02a71985-19f1-4b72-8ceb-9c7d591c4710"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:24:55 crc kubenswrapper[4482]: I1125 08:24:55.746073 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fclfp\" (UniqueName: \"kubernetes.io/projected/02a71985-19f1-4b72-8ceb-9c7d591c4710-kube-api-access-fclfp\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:55 crc kubenswrapper[4482]: I1125 08:24:55.746199 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02a71985-19f1-4b72-8ceb-9c7d591c4710-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.033894 4482 generic.go:334] "Generic (PLEG): container finished" podID="02a71985-19f1-4b72-8ceb-9c7d591c4710" containerID="804c4c24594f7af54a6890dc782ca16d226eb170bec085f313b5476dae99461e" exitCode=0 Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.033939 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pjs9" event={"ID":"02a71985-19f1-4b72-8ceb-9c7d591c4710","Type":"ContainerDied","Data":"804c4c24594f7af54a6890dc782ca16d226eb170bec085f313b5476dae99461e"} Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.033970 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pjs9" event={"ID":"02a71985-19f1-4b72-8ceb-9c7d591c4710","Type":"ContainerDied","Data":"f76ca1a0346f01ffe2ae7ba4f3ea7d5c956ec5c2cd1b57a861d0ef4791acbd54"} Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.033986 4482 scope.go:117] "RemoveContainer" containerID="804c4c24594f7af54a6890dc782ca16d226eb170bec085f313b5476dae99461e" Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.034029 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7pjs9" Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.050218 4482 scope.go:117] "RemoveContainer" containerID="6fbdad553f65519d6e93e21ae9de0da9b54351bc2284b550b7cea5bb3641d767" Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.059484 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7pjs9"] Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.066274 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7pjs9"] Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.072609 4482 scope.go:117] "RemoveContainer" containerID="9b7ec5eca2293d053fd0df90042c56b41b24448830e5d16db9f9abd45866d59f" Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.107718 4482 scope.go:117] "RemoveContainer" containerID="804c4c24594f7af54a6890dc782ca16d226eb170bec085f313b5476dae99461e" Nov 25 08:24:56 crc kubenswrapper[4482]: E1125 08:24:56.108022 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"804c4c24594f7af54a6890dc782ca16d226eb170bec085f313b5476dae99461e\": container with ID starting with 804c4c24594f7af54a6890dc782ca16d226eb170bec085f313b5476dae99461e not found: ID does not exist" containerID="804c4c24594f7af54a6890dc782ca16d226eb170bec085f313b5476dae99461e" Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.108054 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"804c4c24594f7af54a6890dc782ca16d226eb170bec085f313b5476dae99461e"} err="failed to get container status \"804c4c24594f7af54a6890dc782ca16d226eb170bec085f313b5476dae99461e\": rpc error: code = NotFound desc = could not find container \"804c4c24594f7af54a6890dc782ca16d226eb170bec085f313b5476dae99461e\": container with ID starting with 804c4c24594f7af54a6890dc782ca16d226eb170bec085f313b5476dae99461e not found: ID does not exist" Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.108074 4482 scope.go:117] "RemoveContainer" containerID="6fbdad553f65519d6e93e21ae9de0da9b54351bc2284b550b7cea5bb3641d767" Nov 25 08:24:56 crc kubenswrapper[4482]: E1125 08:24:56.108334 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fbdad553f65519d6e93e21ae9de0da9b54351bc2284b550b7cea5bb3641d767\": container with ID starting with 6fbdad553f65519d6e93e21ae9de0da9b54351bc2284b550b7cea5bb3641d767 not found: ID does not exist" containerID="6fbdad553f65519d6e93e21ae9de0da9b54351bc2284b550b7cea5bb3641d767" Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.108361 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbdad553f65519d6e93e21ae9de0da9b54351bc2284b550b7cea5bb3641d767"} err="failed to get container status \"6fbdad553f65519d6e93e21ae9de0da9b54351bc2284b550b7cea5bb3641d767\": rpc error: code = NotFound desc = could not find container \"6fbdad553f65519d6e93e21ae9de0da9b54351bc2284b550b7cea5bb3641d767\": container with ID starting with 6fbdad553f65519d6e93e21ae9de0da9b54351bc2284b550b7cea5bb3641d767 not found: ID does not exist" Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.108373 4482 scope.go:117] "RemoveContainer" containerID="9b7ec5eca2293d053fd0df90042c56b41b24448830e5d16db9f9abd45866d59f" Nov 25 08:24:56 crc kubenswrapper[4482]: E1125 08:24:56.108569 4482 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9b7ec5eca2293d053fd0df90042c56b41b24448830e5d16db9f9abd45866d59f\": container with ID starting with 9b7ec5eca2293d053fd0df90042c56b41b24448830e5d16db9f9abd45866d59f not found: ID does not exist" containerID="9b7ec5eca2293d053fd0df90042c56b41b24448830e5d16db9f9abd45866d59f" Nov 25 08:24:56 crc kubenswrapper[4482]: I1125 08:24:56.108594 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7ec5eca2293d053fd0df90042c56b41b24448830e5d16db9f9abd45866d59f"} err="failed to get container status \"9b7ec5eca2293d053fd0df90042c56b41b24448830e5d16db9f9abd45866d59f\": rpc error: code = NotFound desc = could not find container \"9b7ec5eca2293d053fd0df90042c56b41b24448830e5d16db9f9abd45866d59f\": container with ID starting with 9b7ec5eca2293d053fd0df90042c56b41b24448830e5d16db9f9abd45866d59f not found: ID does not exist" Nov 25 08:24:57 crc kubenswrapper[4482]: I1125 08:24:57.839357 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a71985-19f1-4b72-8ceb-9c7d591c4710" path="/var/lib/kubelet/pods/02a71985-19f1-4b72-8ceb-9c7d591c4710/volumes" Nov 25 08:25:06 crc kubenswrapper[4482]: I1125 08:25:06.830208 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:25:06 crc kubenswrapper[4482]: E1125 08:25:06.830721 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:25:17 crc kubenswrapper[4482]: I1125 08:25:17.831521 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:25:17 crc kubenswrapper[4482]: E1125 08:25:17.832619 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:25:30 crc kubenswrapper[4482]: I1125 08:25:30.832063 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:25:30 crc kubenswrapper[4482]: E1125 08:25:30.832791 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:25:42 crc kubenswrapper[4482]: I1125 08:25:42.831361 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:25:42 crc kubenswrapper[4482]: E1125 08:25:42.831873 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:25:56 crc kubenswrapper[4482]: I1125 08:25:56.832020 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:25:56 crc kubenswrapper[4482]: E1125 08:25:56.832560 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:26:09 crc kubenswrapper[4482]: I1125 08:26:09.830579 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:26:10 crc kubenswrapper[4482]: I1125 08:26:10.609885 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"3ad1d9f4f59ae368d1166d9c47a45b3d6870f9db2645ff953452c043244018d2"} Nov 25 08:28:09 crc kubenswrapper[4482]: I1125 08:28:09.118310 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:28:09 crc kubenswrapper[4482]: I1125 08:28:09.118983 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:28:39 crc kubenswrapper[4482]: I1125 08:28:39.118012 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:28:39 crc kubenswrapper[4482]: I1125 08:28:39.118533 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:29:09 crc kubenswrapper[4482]: I1125 08:29:09.117349 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:29:09 crc kubenswrapper[4482]: I1125 08:29:09.117875 4482 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:29:09 crc kubenswrapper[4482]: I1125 08:29:09.117928 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 08:29:09 crc kubenswrapper[4482]: I1125 08:29:09.118583 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ad1d9f4f59ae368d1166d9c47a45b3d6870f9db2645ff953452c043244018d2"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:29:09 crc kubenswrapper[4482]: I1125 08:29:09.118638 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://3ad1d9f4f59ae368d1166d9c47a45b3d6870f9db2645ff953452c043244018d2" gracePeriod=600 Nov 25 08:29:10 crc kubenswrapper[4482]: I1125 08:29:10.155825 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="3ad1d9f4f59ae368d1166d9c47a45b3d6870f9db2645ff953452c043244018d2" exitCode=0 Nov 25 08:29:10 crc kubenswrapper[4482]: I1125 08:29:10.155891 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"3ad1d9f4f59ae368d1166d9c47a45b3d6870f9db2645ff953452c043244018d2"} Nov 25 08:29:10 crc kubenswrapper[4482]: I1125 08:29:10.156593 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9"} Nov 25 08:29:10 crc kubenswrapper[4482]: I1125 08:29:10.156639 4482 scope.go:117] "RemoveContainer" containerID="6e34f46d43d2fd1a406048794b4ec3953f97ecdb8551bac9146104f98f6b63fe" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.154899 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw"] Nov 25 08:30:00 crc kubenswrapper[4482]: E1125 08:30:00.155766 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a71985-19f1-4b72-8ceb-9c7d591c4710" containerName="extract-content" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.155779 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a71985-19f1-4b72-8ceb-9c7d591c4710" containerName="extract-content" Nov 25 08:30:00 crc kubenswrapper[4482]: E1125 08:30:00.155794 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a71985-19f1-4b72-8ceb-9c7d591c4710" containerName="registry-server" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.155800 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a71985-19f1-4b72-8ceb-9c7d591c4710" containerName="registry-server" Nov 25 08:30:00 crc kubenswrapper[4482]: E1125 08:30:00.155812 4482 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="02a71985-19f1-4b72-8ceb-9c7d591c4710" containerName="extract-utilities" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.155818 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a71985-19f1-4b72-8ceb-9c7d591c4710" containerName="extract-utilities" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.156013 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a71985-19f1-4b72-8ceb-9c7d591c4710" containerName="registry-server" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.156670 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.158374 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.159967 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.188111 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw"] Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.304467 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/441313e7-57b2-49ed-bf53-140defacbac3-config-volume\") pod \"collect-profiles-29400990-4tqdw\" (UID: \"441313e7-57b2-49ed-bf53-140defacbac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.304827 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltgrn\" (UniqueName: \"kubernetes.io/projected/441313e7-57b2-49ed-bf53-140defacbac3-kube-api-access-ltgrn\") pod \"collect-profiles-29400990-4tqdw\" (UID: \"441313e7-57b2-49ed-bf53-140defacbac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.304937 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/441313e7-57b2-49ed-bf53-140defacbac3-secret-volume\") pod \"collect-profiles-29400990-4tqdw\" (UID: \"441313e7-57b2-49ed-bf53-140defacbac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.375891 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vwrpt"] Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.382882 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.411021 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/441313e7-57b2-49ed-bf53-140defacbac3-config-volume\") pod \"collect-profiles-29400990-4tqdw\" (UID: \"441313e7-57b2-49ed-bf53-140defacbac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.409672 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/441313e7-57b2-49ed-bf53-140defacbac3-config-volume\") pod \"collect-profiles-29400990-4tqdw\" (UID: \"441313e7-57b2-49ed-bf53-140defacbac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.411599 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltgrn\" (UniqueName: \"kubernetes.io/projected/441313e7-57b2-49ed-bf53-140defacbac3-kube-api-access-ltgrn\") pod \"collect-profiles-29400990-4tqdw\" (UID: \"441313e7-57b2-49ed-bf53-140defacbac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.412094 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/441313e7-57b2-49ed-bf53-140defacbac3-secret-volume\") pod \"collect-profiles-29400990-4tqdw\" (UID: \"441313e7-57b2-49ed-bf53-140defacbac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.424017 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwrpt"] Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.434821 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/441313e7-57b2-49ed-bf53-140defacbac3-secret-volume\") pod \"collect-profiles-29400990-4tqdw\" (UID: \"441313e7-57b2-49ed-bf53-140defacbac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.441462 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltgrn\" (UniqueName: \"kubernetes.io/projected/441313e7-57b2-49ed-bf53-140defacbac3-kube-api-access-ltgrn\") pod \"collect-profiles-29400990-4tqdw\" (UID: \"441313e7-57b2-49ed-bf53-140defacbac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.488310 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.529023 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-utilities\") pod \"certified-operators-vwrpt\" (UID: \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\") " pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.529232 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfj88\" (UniqueName: \"kubernetes.io/projected/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-kube-api-access-wfj88\") pod \"certified-operators-vwrpt\" (UID: \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\") " pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.529440 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-catalog-content\") pod \"certified-operators-vwrpt\" (UID: \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\") " pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.632667 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-utilities\") pod \"certified-operators-vwrpt\" (UID: \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\") " pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.633076 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfj88\" (UniqueName: \"kubernetes.io/projected/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-kube-api-access-wfj88\") pod \"certified-operators-vwrpt\" (UID: \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\") " pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.633275 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-catalog-content\") pod \"certified-operators-vwrpt\" (UID: \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\") " pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.634203 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-catalog-content\") pod \"certified-operators-vwrpt\" (UID: \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\") " pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.634435 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-utilities\") pod \"certified-operators-vwrpt\" (UID: \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\") " pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.655356 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfj88\" (UniqueName: 
\"kubernetes.io/projected/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-kube-api-access-wfj88\") pod \"certified-operators-vwrpt\" (UID: \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\") " pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:00 crc kubenswrapper[4482]: I1125 08:30:00.709780 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:01 crc kubenswrapper[4482]: I1125 08:30:01.019834 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw"] Nov 25 08:30:01 crc kubenswrapper[4482]: I1125 08:30:01.229955 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwrpt"] Nov 25 08:30:01 crc kubenswrapper[4482]: W1125 08:30:01.233458 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ab8dac6_2fee_4b21_83d8_06dd84a1365b.slice/crio-7801cdbc8821a772eeff94be587291a2d5d9ea376d47b8af6849128a56584d45 WatchSource:0}: Error finding container 7801cdbc8821a772eeff94be587291a2d5d9ea376d47b8af6849128a56584d45: Status 404 returned error can't find the container with id 7801cdbc8821a772eeff94be587291a2d5d9ea376d47b8af6849128a56584d45 Nov 25 08:30:01 crc kubenswrapper[4482]: I1125 08:30:01.600835 4482 generic.go:334] "Generic (PLEG): container finished" podID="3ab8dac6-2fee-4b21-83d8-06dd84a1365b" containerID="4b6fb6ed7282f33b6e7036d91e411f90d49ab071e9e101e08db05232d721d0c2" exitCode=0 Nov 25 08:30:01 crc kubenswrapper[4482]: I1125 08:30:01.600945 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwrpt" event={"ID":"3ab8dac6-2fee-4b21-83d8-06dd84a1365b","Type":"ContainerDied","Data":"4b6fb6ed7282f33b6e7036d91e411f90d49ab071e9e101e08db05232d721d0c2"} Nov 25 08:30:01 crc kubenswrapper[4482]: I1125 08:30:01.601319 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwrpt" event={"ID":"3ab8dac6-2fee-4b21-83d8-06dd84a1365b","Type":"ContainerStarted","Data":"7801cdbc8821a772eeff94be587291a2d5d9ea376d47b8af6849128a56584d45"} Nov 25 08:30:01 crc kubenswrapper[4482]: I1125 08:30:01.603014 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:30:01 crc kubenswrapper[4482]: I1125 08:30:01.604120 4482 generic.go:334] "Generic (PLEG): container finished" podID="441313e7-57b2-49ed-bf53-140defacbac3" containerID="457c1f5c41e9fc9e29a4733cb367f004dfa81fac8b0ede95940208bd5bca7682" exitCode=0 Nov 25 08:30:01 crc kubenswrapper[4482]: I1125 08:30:01.604164 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" event={"ID":"441313e7-57b2-49ed-bf53-140defacbac3","Type":"ContainerDied","Data":"457c1f5c41e9fc9e29a4733cb367f004dfa81fac8b0ede95940208bd5bca7682"} Nov 25 08:30:01 crc kubenswrapper[4482]: I1125 08:30:01.604230 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" event={"ID":"441313e7-57b2-49ed-bf53-140defacbac3","Type":"ContainerStarted","Data":"79f82c5e29b2c0fce0ae3148a3bf6e519398bd58566ed7568536f39cb0ae97f8"} Nov 25 08:30:02 crc kubenswrapper[4482]: I1125 08:30:02.904069 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" Nov 25 08:30:02 crc kubenswrapper[4482]: I1125 08:30:02.986006 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltgrn\" (UniqueName: \"kubernetes.io/projected/441313e7-57b2-49ed-bf53-140defacbac3-kube-api-access-ltgrn\") pod \"441313e7-57b2-49ed-bf53-140defacbac3\" (UID: \"441313e7-57b2-49ed-bf53-140defacbac3\") " Nov 25 08:30:02 crc kubenswrapper[4482]: I1125 08:30:02.986130 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/441313e7-57b2-49ed-bf53-140defacbac3-config-volume\") pod \"441313e7-57b2-49ed-bf53-140defacbac3\" (UID: \"441313e7-57b2-49ed-bf53-140defacbac3\") " Nov 25 08:30:02 crc kubenswrapper[4482]: I1125 08:30:02.986220 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/441313e7-57b2-49ed-bf53-140defacbac3-secret-volume\") pod \"441313e7-57b2-49ed-bf53-140defacbac3\" (UID: \"441313e7-57b2-49ed-bf53-140defacbac3\") " Nov 25 08:30:02 crc kubenswrapper[4482]: I1125 08:30:02.986669 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441313e7-57b2-49ed-bf53-140defacbac3-config-volume" (OuterVolumeSpecName: "config-volume") pod "441313e7-57b2-49ed-bf53-140defacbac3" (UID: "441313e7-57b2-49ed-bf53-140defacbac3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:30:02 crc kubenswrapper[4482]: I1125 08:30:02.987198 4482 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/441313e7-57b2-49ed-bf53-140defacbac3-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:02 crc kubenswrapper[4482]: I1125 08:30:02.991466 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441313e7-57b2-49ed-bf53-140defacbac3-kube-api-access-ltgrn" (OuterVolumeSpecName: "kube-api-access-ltgrn") pod "441313e7-57b2-49ed-bf53-140defacbac3" (UID: "441313e7-57b2-49ed-bf53-140defacbac3"). InnerVolumeSpecName "kube-api-access-ltgrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:02 crc kubenswrapper[4482]: I1125 08:30:02.992682 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/441313e7-57b2-49ed-bf53-140defacbac3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "441313e7-57b2-49ed-bf53-140defacbac3" (UID: "441313e7-57b2-49ed-bf53-140defacbac3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:03 crc kubenswrapper[4482]: I1125 08:30:03.088738 4482 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/441313e7-57b2-49ed-bf53-140defacbac3-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:03 crc kubenswrapper[4482]: I1125 08:30:03.088775 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltgrn\" (UniqueName: \"kubernetes.io/projected/441313e7-57b2-49ed-bf53-140defacbac3-kube-api-access-ltgrn\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:03 crc kubenswrapper[4482]: I1125 08:30:03.623124 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" event={"ID":"441313e7-57b2-49ed-bf53-140defacbac3","Type":"ContainerDied","Data":"79f82c5e29b2c0fce0ae3148a3bf6e519398bd58566ed7568536f39cb0ae97f8"} Nov 25 08:30:03 crc kubenswrapper[4482]: I1125 08:30:03.623583 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79f82c5e29b2c0fce0ae3148a3bf6e519398bd58566ed7568536f39cb0ae97f8" Nov 25 08:30:03 crc kubenswrapper[4482]: I1125 08:30:03.623185 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-4tqdw" Nov 25 08:30:03 crc kubenswrapper[4482]: I1125 08:30:03.625743 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwrpt" event={"ID":"3ab8dac6-2fee-4b21-83d8-06dd84a1365b","Type":"ContainerStarted","Data":"41350480602aed9239a4d98f5e40533f07873f9a1529e099d18586f2bcea1be8"} Nov 25 08:30:03 crc kubenswrapper[4482]: I1125 08:30:03.975224 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"] Nov 25 08:30:04 crc kubenswrapper[4482]: I1125 08:30:04.004105 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400945-p5qrv"] Nov 25 08:30:04 crc kubenswrapper[4482]: I1125 08:30:04.635442 4482 generic.go:334] "Generic (PLEG): container finished" podID="3ab8dac6-2fee-4b21-83d8-06dd84a1365b" containerID="41350480602aed9239a4d98f5e40533f07873f9a1529e099d18586f2bcea1be8" exitCode=0 Nov 25 08:30:04 crc kubenswrapper[4482]: I1125 08:30:04.635556 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwrpt" event={"ID":"3ab8dac6-2fee-4b21-83d8-06dd84a1365b","Type":"ContainerDied","Data":"41350480602aed9239a4d98f5e40533f07873f9a1529e099d18586f2bcea1be8"} Nov 25 08:30:05 crc kubenswrapper[4482]: E1125 08:30:05.202770 4482 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.26.133:36990->192.168.26.133:42749: write tcp 192.168.26.133:36990->192.168.26.133:42749: write: connection reset by peer Nov 25 08:30:05 crc kubenswrapper[4482]: I1125 08:30:05.652313 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwrpt" event={"ID":"3ab8dac6-2fee-4b21-83d8-06dd84a1365b","Type":"ContainerStarted","Data":"0d916432e89123e7a69589aecfaf3eb4e703a0e147e6a4cd6617e53957b4b242"} Nov 25 08:30:05 crc kubenswrapper[4482]: I1125 08:30:05.671311 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vwrpt" podStartSLOduration=2.152547755 podStartE2EDuration="5.671292142s" 
podCreationTimestamp="2025-11-25 08:30:00 +0000 UTC" firstStartedPulling="2025-11-25 08:30:01.60279183 +0000 UTC m=+6176.091023089" lastFinishedPulling="2025-11-25 08:30:05.121536217 +0000 UTC m=+6179.609767476" observedRunningTime="2025-11-25 08:30:05.66940179 +0000 UTC m=+6180.157633049" watchObservedRunningTime="2025-11-25 08:30:05.671292142 +0000 UTC m=+6180.159523401" Nov 25 08:30:05 crc kubenswrapper[4482]: I1125 08:30:05.850269 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6231fe7-2d66-480b-93fb-5bb66a84dcaf" path="/var/lib/kubelet/pods/c6231fe7-2d66-480b-93fb-5bb66a84dcaf/volumes" Nov 25 08:30:05 crc kubenswrapper[4482]: E1125 08:30:05.894297 4482 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.26.133:37030->192.168.26.133:42749: write tcp 192.168.26.133:37030->192.168.26.133:42749: write: broken pipe Nov 25 08:30:10 crc kubenswrapper[4482]: I1125 08:30:10.711002 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:10 crc kubenswrapper[4482]: I1125 08:30:10.711757 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:10 crc kubenswrapper[4482]: I1125 08:30:10.763301 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:11 crc kubenswrapper[4482]: I1125 08:30:11.743745 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:11 crc kubenswrapper[4482]: I1125 08:30:11.805723 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwrpt"] Nov 25 08:30:13 crc kubenswrapper[4482]: I1125 08:30:13.728404 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vwrpt" podUID="3ab8dac6-2fee-4b21-83d8-06dd84a1365b" containerName="registry-server" containerID="cri-o://0d916432e89123e7a69589aecfaf3eb4e703a0e147e6a4cd6617e53957b4b242" gracePeriod=2 Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.132516 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.150628 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-catalog-content\") pod \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\" (UID: \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\") " Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.151122 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfj88\" (UniqueName: \"kubernetes.io/projected/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-kube-api-access-wfj88\") pod \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\" (UID: \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\") " Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.151151 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-utilities\") pod \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\" (UID: \"3ab8dac6-2fee-4b21-83d8-06dd84a1365b\") " Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.151712 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-utilities" (OuterVolumeSpecName: "utilities") pod "3ab8dac6-2fee-4b21-83d8-06dd84a1365b" (UID: "3ab8dac6-2fee-4b21-83d8-06dd84a1365b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.157908 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-kube-api-access-wfj88" (OuterVolumeSpecName: "kube-api-access-wfj88") pod "3ab8dac6-2fee-4b21-83d8-06dd84a1365b" (UID: "3ab8dac6-2fee-4b21-83d8-06dd84a1365b"). InnerVolumeSpecName "kube-api-access-wfj88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.187518 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ab8dac6-2fee-4b21-83d8-06dd84a1365b" (UID: "3ab8dac6-2fee-4b21-83d8-06dd84a1365b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.253430 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfj88\" (UniqueName: \"kubernetes.io/projected/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-kube-api-access-wfj88\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.253465 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.253476 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ab8dac6-2fee-4b21-83d8-06dd84a1365b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.748811 4482 generic.go:334] "Generic (PLEG): container finished" podID="3ab8dac6-2fee-4b21-83d8-06dd84a1365b" containerID="0d916432e89123e7a69589aecfaf3eb4e703a0e147e6a4cd6617e53957b4b242" exitCode=0 Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.748855 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwrpt" event={"ID":"3ab8dac6-2fee-4b21-83d8-06dd84a1365b","Type":"ContainerDied","Data":"0d916432e89123e7a69589aecfaf3eb4e703a0e147e6a4cd6617e53957b4b242"} Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.748874 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwrpt" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.748888 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwrpt" event={"ID":"3ab8dac6-2fee-4b21-83d8-06dd84a1365b","Type":"ContainerDied","Data":"7801cdbc8821a772eeff94be587291a2d5d9ea376d47b8af6849128a56584d45"} Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.748903 4482 scope.go:117] "RemoveContainer" containerID="0d916432e89123e7a69589aecfaf3eb4e703a0e147e6a4cd6617e53957b4b242" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.778383 4482 scope.go:117] "RemoveContainer" containerID="41350480602aed9239a4d98f5e40533f07873f9a1529e099d18586f2bcea1be8" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.780878 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwrpt"] Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.797289 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vwrpt"] Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.806760 4482 scope.go:117] "RemoveContainer" containerID="4b6fb6ed7282f33b6e7036d91e411f90d49ab071e9e101e08db05232d721d0c2" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.834232 4482 scope.go:117] "RemoveContainer" containerID="0d916432e89123e7a69589aecfaf3eb4e703a0e147e6a4cd6617e53957b4b242" Nov 25 08:30:14 crc kubenswrapper[4482]: E1125 08:30:14.834579 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d916432e89123e7a69589aecfaf3eb4e703a0e147e6a4cd6617e53957b4b242\": container with ID starting with 0d916432e89123e7a69589aecfaf3eb4e703a0e147e6a4cd6617e53957b4b242 not found: ID does not exist" containerID="0d916432e89123e7a69589aecfaf3eb4e703a0e147e6a4cd6617e53957b4b242" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.834613 
4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d916432e89123e7a69589aecfaf3eb4e703a0e147e6a4cd6617e53957b4b242"} err="failed to get container status \"0d916432e89123e7a69589aecfaf3eb4e703a0e147e6a4cd6617e53957b4b242\": rpc error: code = NotFound desc = could not find container \"0d916432e89123e7a69589aecfaf3eb4e703a0e147e6a4cd6617e53957b4b242\": container with ID starting with 0d916432e89123e7a69589aecfaf3eb4e703a0e147e6a4cd6617e53957b4b242 not found: ID does not exist" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.834635 4482 scope.go:117] "RemoveContainer" containerID="41350480602aed9239a4d98f5e40533f07873f9a1529e099d18586f2bcea1be8" Nov 25 08:30:14 crc kubenswrapper[4482]: E1125 08:30:14.834901 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41350480602aed9239a4d98f5e40533f07873f9a1529e099d18586f2bcea1be8\": container with ID starting with 41350480602aed9239a4d98f5e40533f07873f9a1529e099d18586f2bcea1be8 not found: ID does not exist" containerID="41350480602aed9239a4d98f5e40533f07873f9a1529e099d18586f2bcea1be8" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.834922 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41350480602aed9239a4d98f5e40533f07873f9a1529e099d18586f2bcea1be8"} err="failed to get container status \"41350480602aed9239a4d98f5e40533f07873f9a1529e099d18586f2bcea1be8\": rpc error: code = NotFound desc = could not find container \"41350480602aed9239a4d98f5e40533f07873f9a1529e099d18586f2bcea1be8\": container with ID starting with 41350480602aed9239a4d98f5e40533f07873f9a1529e099d18586f2bcea1be8 not found: ID does not exist" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.834934 4482 scope.go:117] "RemoveContainer" containerID="4b6fb6ed7282f33b6e7036d91e411f90d49ab071e9e101e08db05232d721d0c2" Nov 25 08:30:14 crc kubenswrapper[4482]: E1125 08:30:14.835209 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b6fb6ed7282f33b6e7036d91e411f90d49ab071e9e101e08db05232d721d0c2\": container with ID starting with 4b6fb6ed7282f33b6e7036d91e411f90d49ab071e9e101e08db05232d721d0c2 not found: ID does not exist" containerID="4b6fb6ed7282f33b6e7036d91e411f90d49ab071e9e101e08db05232d721d0c2" Nov 25 08:30:14 crc kubenswrapper[4482]: I1125 08:30:14.835229 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b6fb6ed7282f33b6e7036d91e411f90d49ab071e9e101e08db05232d721d0c2"} err="failed to get container status \"4b6fb6ed7282f33b6e7036d91e411f90d49ab071e9e101e08db05232d721d0c2\": rpc error: code = NotFound desc = could not find container \"4b6fb6ed7282f33b6e7036d91e411f90d49ab071e9e101e08db05232d721d0c2\": container with ID starting with 4b6fb6ed7282f33b6e7036d91e411f90d49ab071e9e101e08db05232d721d0c2 not found: ID does not exist" Nov 25 08:30:15 crc kubenswrapper[4482]: I1125 08:30:15.841153 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab8dac6-2fee-4b21-83d8-06dd84a1365b" path="/var/lib/kubelet/pods/3ab8dac6-2fee-4b21-83d8-06dd84a1365b/volumes" Nov 25 08:30:41 crc kubenswrapper[4482]: I1125 08:30:41.668210 4482 scope.go:117] "RemoveContainer" containerID="a7a94b36878e746b6641c9204bbce80c13a3d148db3cfe3d574abc5cdd339e5a" Nov 25 08:31:09 crc kubenswrapper[4482]: I1125 08:31:09.118118 4482 patch_prober.go:28] interesting 
pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:31:09 crc kubenswrapper[4482]: I1125 08:31:09.118529 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:31:39 crc kubenswrapper[4482]: I1125 08:31:39.117900 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:31:39 crc kubenswrapper[4482]: I1125 08:31:39.118341 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:32:09 crc kubenswrapper[4482]: I1125 08:32:09.117769 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:32:09 crc kubenswrapper[4482]: I1125 08:32:09.118120 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:32:09 crc kubenswrapper[4482]: I1125 08:32:09.118161 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 08:32:09 crc kubenswrapper[4482]: I1125 08:32:09.118769 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:32:09 crc kubenswrapper[4482]: I1125 08:32:09.118816 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" gracePeriod=600 Nov 25 08:32:09 crc kubenswrapper[4482]: E1125 08:32:09.241024 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:32:09 crc kubenswrapper[4482]: I1125 08:32:09.605256 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" exitCode=0 Nov 25 08:32:09 crc kubenswrapper[4482]: I1125 08:32:09.605328 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9"} Nov 25 08:32:09 crc kubenswrapper[4482]: I1125 08:32:09.605489 4482 scope.go:117] "RemoveContainer" containerID="3ad1d9f4f59ae368d1166d9c47a45b3d6870f9db2645ff953452c043244018d2" Nov 25 08:32:09 crc kubenswrapper[4482]: I1125 08:32:09.606376 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:32:09 crc kubenswrapper[4482]: E1125 08:32:09.606802 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:32:23 crc kubenswrapper[4482]: I1125 08:32:23.831245 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:32:23 crc kubenswrapper[4482]: E1125 08:32:23.832037 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:32:37 crc kubenswrapper[4482]: I1125 08:32:37.830630 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:32:37 crc kubenswrapper[4482]: E1125 08:32:37.831382 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:32:43 crc kubenswrapper[4482]: I1125 08:32:43.944647 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v66ts"] Nov 25 08:32:43 crc kubenswrapper[4482]: E1125 08:32:43.946036 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab8dac6-2fee-4b21-83d8-06dd84a1365b" containerName="extract-utilities" Nov 25 08:32:43 crc kubenswrapper[4482]: I1125 08:32:43.946126 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab8dac6-2fee-4b21-83d8-06dd84a1365b" containerName="extract-utilities" Nov 25 08:32:43 crc kubenswrapper[4482]: E1125 08:32:43.946228 4482 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab8dac6-2fee-4b21-83d8-06dd84a1365b" containerName="registry-server" Nov 25 08:32:43 crc kubenswrapper[4482]: I1125 08:32:43.946284 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab8dac6-2fee-4b21-83d8-06dd84a1365b" containerName="registry-server" Nov 25 08:32:43 crc kubenswrapper[4482]: E1125 08:32:43.946354 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab8dac6-2fee-4b21-83d8-06dd84a1365b" containerName="extract-content" Nov 25 08:32:43 crc kubenswrapper[4482]: I1125 08:32:43.946405 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab8dac6-2fee-4b21-83d8-06dd84a1365b" containerName="extract-content" Nov 25 08:32:43 crc kubenswrapper[4482]: E1125 08:32:43.946464 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441313e7-57b2-49ed-bf53-140defacbac3" containerName="collect-profiles" Nov 25 08:32:43 crc kubenswrapper[4482]: I1125 08:32:43.946514 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="441313e7-57b2-49ed-bf53-140defacbac3" containerName="collect-profiles" Nov 25 08:32:43 crc kubenswrapper[4482]: I1125 08:32:43.946732 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab8dac6-2fee-4b21-83d8-06dd84a1365b" containerName="registry-server" Nov 25 08:32:43 crc kubenswrapper[4482]: I1125 08:32:43.946806 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="441313e7-57b2-49ed-bf53-140defacbac3" containerName="collect-profiles" Nov 25 08:32:43 crc kubenswrapper[4482]: I1125 08:32:43.948458 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v66ts" Nov 25 08:32:43 crc kubenswrapper[4482]: I1125 08:32:43.957230 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v66ts"] Nov 25 08:32:44 crc kubenswrapper[4482]: I1125 08:32:44.023658 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08c8d209-81db-4027-90a6-f796b7c5b02c-catalog-content\") pod \"community-operators-v66ts\" (UID: \"08c8d209-81db-4027-90a6-f796b7c5b02c\") " pod="openshift-marketplace/community-operators-v66ts" Nov 25 08:32:44 crc kubenswrapper[4482]: I1125 08:32:44.023746 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08c8d209-81db-4027-90a6-f796b7c5b02c-utilities\") pod \"community-operators-v66ts\" (UID: \"08c8d209-81db-4027-90a6-f796b7c5b02c\") " pod="openshift-marketplace/community-operators-v66ts" Nov 25 08:32:44 crc kubenswrapper[4482]: I1125 08:32:44.023839 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxs8c\" (UniqueName: \"kubernetes.io/projected/08c8d209-81db-4027-90a6-f796b7c5b02c-kube-api-access-jxs8c\") pod \"community-operators-v66ts\" (UID: \"08c8d209-81db-4027-90a6-f796b7c5b02c\") " pod="openshift-marketplace/community-operators-v66ts" Nov 25 08:32:44 crc kubenswrapper[4482]: I1125 08:32:44.125393 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxs8c\" (UniqueName: \"kubernetes.io/projected/08c8d209-81db-4027-90a6-f796b7c5b02c-kube-api-access-jxs8c\") pod \"community-operators-v66ts\" (UID: \"08c8d209-81db-4027-90a6-f796b7c5b02c\") " pod="openshift-marketplace/community-operators-v66ts" 
Nov 25 08:32:44 crc kubenswrapper[4482]: I1125 08:32:44.125590 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08c8d209-81db-4027-90a6-f796b7c5b02c-catalog-content\") pod \"community-operators-v66ts\" (UID: \"08c8d209-81db-4027-90a6-f796b7c5b02c\") " pod="openshift-marketplace/community-operators-v66ts"
Nov 25 08:32:44 crc kubenswrapper[4482]: I1125 08:32:44.125712 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08c8d209-81db-4027-90a6-f796b7c5b02c-utilities\") pod \"community-operators-v66ts\" (UID: \"08c8d209-81db-4027-90a6-f796b7c5b02c\") " pod="openshift-marketplace/community-operators-v66ts"
Nov 25 08:32:44 crc kubenswrapper[4482]: I1125 08:32:44.126059 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08c8d209-81db-4027-90a6-f796b7c5b02c-catalog-content\") pod \"community-operators-v66ts\" (UID: \"08c8d209-81db-4027-90a6-f796b7c5b02c\") " pod="openshift-marketplace/community-operators-v66ts"
Nov 25 08:32:44 crc kubenswrapper[4482]: I1125 08:32:44.126093 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08c8d209-81db-4027-90a6-f796b7c5b02c-utilities\") pod \"community-operators-v66ts\" (UID: \"08c8d209-81db-4027-90a6-f796b7c5b02c\") " pod="openshift-marketplace/community-operators-v66ts"
Nov 25 08:32:44 crc kubenswrapper[4482]: I1125 08:32:44.142012 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxs8c\" (UniqueName: \"kubernetes.io/projected/08c8d209-81db-4027-90a6-f796b7c5b02c-kube-api-access-jxs8c\") pod \"community-operators-v66ts\" (UID: \"08c8d209-81db-4027-90a6-f796b7c5b02c\") " pod="openshift-marketplace/community-operators-v66ts"
Nov 25 08:32:44 crc kubenswrapper[4482]: I1125 08:32:44.269524 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v66ts"
Nov 25 08:32:44 crc kubenswrapper[4482]: I1125 08:32:44.729992 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v66ts"]
Nov 25 08:32:44 crc kubenswrapper[4482]: I1125 08:32:44.875750 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v66ts" event={"ID":"08c8d209-81db-4027-90a6-f796b7c5b02c","Type":"ContainerStarted","Data":"f545d2add7740e71347ab7027365392275c6bc8d6f222f1fd7420c3427900227"}
Nov 25 08:32:45 crc kubenswrapper[4482]: I1125 08:32:45.883874 4482 generic.go:334] "Generic (PLEG): container finished" podID="08c8d209-81db-4027-90a6-f796b7c5b02c" containerID="07a4f96f5158d2c23469106109e5e3646dc75cc7f666a3442c9a8d4b28916b84" exitCode=0
Nov 25 08:32:45 crc kubenswrapper[4482]: I1125 08:32:45.883974 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v66ts" event={"ID":"08c8d209-81db-4027-90a6-f796b7c5b02c","Type":"ContainerDied","Data":"07a4f96f5158d2c23469106109e5e3646dc75cc7f666a3442c9a8d4b28916b84"}
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.342967 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5xx8v"]
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.344743 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.355675 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xx8v"]
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.365721 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27687bb4-ba1f-4be5-b66a-f2d686afde36-catalog-content\") pod \"redhat-marketplace-5xx8v\" (UID: \"27687bb4-ba1f-4be5-b66a-f2d686afde36\") " pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.365851 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27687bb4-ba1f-4be5-b66a-f2d686afde36-utilities\") pod \"redhat-marketplace-5xx8v\" (UID: \"27687bb4-ba1f-4be5-b66a-f2d686afde36\") " pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.365884 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ml9w\" (UniqueName: \"kubernetes.io/projected/27687bb4-ba1f-4be5-b66a-f2d686afde36-kube-api-access-7ml9w\") pod \"redhat-marketplace-5xx8v\" (UID: \"27687bb4-ba1f-4be5-b66a-f2d686afde36\") " pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.467897 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ml9w\" (UniqueName: \"kubernetes.io/projected/27687bb4-ba1f-4be5-b66a-f2d686afde36-kube-api-access-7ml9w\") pod \"redhat-marketplace-5xx8v\" (UID: \"27687bb4-ba1f-4be5-b66a-f2d686afde36\") " pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.468098 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27687bb4-ba1f-4be5-b66a-f2d686afde36-catalog-content\") pod \"redhat-marketplace-5xx8v\" (UID: \"27687bb4-ba1f-4be5-b66a-f2d686afde36\") " pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.468229 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27687bb4-ba1f-4be5-b66a-f2d686afde36-utilities\") pod \"redhat-marketplace-5xx8v\" (UID: \"27687bb4-ba1f-4be5-b66a-f2d686afde36\") " pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.468582 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27687bb4-ba1f-4be5-b66a-f2d686afde36-catalog-content\") pod \"redhat-marketplace-5xx8v\" (UID: \"27687bb4-ba1f-4be5-b66a-f2d686afde36\") " pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.468949 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27687bb4-ba1f-4be5-b66a-f2d686afde36-utilities\") pod \"redhat-marketplace-5xx8v\" (UID: \"27687bb4-ba1f-4be5-b66a-f2d686afde36\") " pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.485159 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ml9w\" (UniqueName: \"kubernetes.io/projected/27687bb4-ba1f-4be5-b66a-f2d686afde36-kube-api-access-7ml9w\") pod \"redhat-marketplace-5xx8v\" (UID: \"27687bb4-ba1f-4be5-b66a-f2d686afde36\") " pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.662372 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:46 crc kubenswrapper[4482]: I1125 08:32:46.893128 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v66ts" event={"ID":"08c8d209-81db-4027-90a6-f796b7c5b02c","Type":"ContainerStarted","Data":"0838de39d45deecd89a0fa4207bc74a1df9030063ffc86a497e106df6df4f309"}
Nov 25 08:32:47 crc kubenswrapper[4482]: I1125 08:32:47.110455 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xx8v"]
Nov 25 08:32:47 crc kubenswrapper[4482]: I1125 08:32:47.902088 4482 generic.go:334] "Generic (PLEG): container finished" podID="27687bb4-ba1f-4be5-b66a-f2d686afde36" containerID="4234308d8b6ff28440be939fcc039cd62b0aecf37638b2b40b63adad31f9e6de" exitCode=0
Nov 25 08:32:47 crc kubenswrapper[4482]: I1125 08:32:47.902162 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xx8v" event={"ID":"27687bb4-ba1f-4be5-b66a-f2d686afde36","Type":"ContainerDied","Data":"4234308d8b6ff28440be939fcc039cd62b0aecf37638b2b40b63adad31f9e6de"}
Nov 25 08:32:47 crc kubenswrapper[4482]: I1125 08:32:47.902367 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xx8v" event={"ID":"27687bb4-ba1f-4be5-b66a-f2d686afde36","Type":"ContainerStarted","Data":"50b4ff0999f17a537df80e8990591cc2798989dfdf7a39dd4dc552969e122dbd"}
Nov 25 08:32:47 crc kubenswrapper[4482]: I1125 08:32:47.906015 4482 generic.go:334] "Generic (PLEG): container finished" podID="08c8d209-81db-4027-90a6-f796b7c5b02c" containerID="0838de39d45deecd89a0fa4207bc74a1df9030063ffc86a497e106df6df4f309" exitCode=0
Nov 25 08:32:47 crc kubenswrapper[4482]: I1125 08:32:47.906054 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v66ts" event={"ID":"08c8d209-81db-4027-90a6-f796b7c5b02c","Type":"ContainerDied","Data":"0838de39d45deecd89a0fa4207bc74a1df9030063ffc86a497e106df6df4f309"}
Nov 25 08:32:48 crc kubenswrapper[4482]: I1125 08:32:48.830069 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9"
Nov 25 08:32:48 crc kubenswrapper[4482]: E1125 08:32:48.830533 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 08:32:48 crc kubenswrapper[4482]: I1125 08:32:48.928282 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xx8v" event={"ID":"27687bb4-ba1f-4be5-b66a-f2d686afde36","Type":"ContainerStarted","Data":"d6faafe110d7681f01786d0082c2c7efedfe807e39808cb788539003460e03e4"}
Nov 25 08:32:48 crc kubenswrapper[4482]: I1125 08:32:48.936025 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v66ts" event={"ID":"08c8d209-81db-4027-90a6-f796b7c5b02c","Type":"ContainerStarted","Data":"a8679c36d8f6244ed21af5279d461a687b918f77204ccc3b92210300a52786d7"}
Nov 25 08:32:48 crc kubenswrapper[4482]: I1125 08:32:48.969077 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v66ts" podStartSLOduration=3.443973243 podStartE2EDuration="5.969061073s" podCreationTimestamp="2025-11-25 08:32:43 +0000 UTC" firstStartedPulling="2025-11-25 08:32:45.886272103 +0000 UTC m=+6340.374503362" lastFinishedPulling="2025-11-25 08:32:48.411359933 +0000 UTC m=+6342.899591192" observedRunningTime="2025-11-25 08:32:48.960449552 +0000 UTC m=+6343.448680811" watchObservedRunningTime="2025-11-25 08:32:48.969061073 +0000 UTC m=+6343.457292332"
Nov 25 08:32:49 crc kubenswrapper[4482]: I1125 08:32:49.952192 4482 generic.go:334] "Generic (PLEG): container finished" podID="27687bb4-ba1f-4be5-b66a-f2d686afde36" containerID="d6faafe110d7681f01786d0082c2c7efedfe807e39808cb788539003460e03e4" exitCode=0
Nov 25 08:32:49 crc kubenswrapper[4482]: I1125 08:32:49.952233 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xx8v" event={"ID":"27687bb4-ba1f-4be5-b66a-f2d686afde36","Type":"ContainerDied","Data":"d6faafe110d7681f01786d0082c2c7efedfe807e39808cb788539003460e03e4"}
Nov 25 08:32:50 crc kubenswrapper[4482]: I1125 08:32:50.964194 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xx8v" event={"ID":"27687bb4-ba1f-4be5-b66a-f2d686afde36","Type":"ContainerStarted","Data":"88fec25f0066e9a997855cdd3a07cd8c358d809ceb43a32ae8ac17524dbe69c4"}
Nov 25 08:32:50 crc kubenswrapper[4482]: I1125 08:32:50.987727 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5xx8v" podStartSLOduration=2.441882199 podStartE2EDuration="4.987712183s" podCreationTimestamp="2025-11-25 08:32:46 +0000 UTC" firstStartedPulling="2025-11-25 08:32:47.904651561 +0000 UTC m=+6342.392882810" lastFinishedPulling="2025-11-25 08:32:50.450481535 +0000 UTC m=+6344.938712794" observedRunningTime="2025-11-25 08:32:50.97907779 +0000 UTC m=+6345.467309048" watchObservedRunningTime="2025-11-25 08:32:50.987712183 +0000 UTC m=+6345.475943442"
Nov 25 08:32:54 crc kubenswrapper[4482]: I1125 08:32:54.269806 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v66ts"
Nov 25 08:32:54 crc kubenswrapper[4482]: I1125 08:32:54.270122 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v66ts"
Nov 25 08:32:54 crc kubenswrapper[4482]: I1125 08:32:54.310185 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v66ts"
Nov 25 08:32:55 crc kubenswrapper[4482]: I1125 08:32:55.029980 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v66ts"
Nov 25 08:32:56 crc kubenswrapper[4482]: I1125 08:32:56.135966 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v66ts"]
Nov 25 08:32:56 crc kubenswrapper[4482]: I1125 08:32:56.663502 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:56 crc kubenswrapper[4482]: I1125 08:32:56.663740 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:56 crc kubenswrapper[4482]: I1125 08:32:56.695133 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:57 crc kubenswrapper[4482]: I1125 08:32:57.009185 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v66ts" podUID="08c8d209-81db-4027-90a6-f796b7c5b02c" containerName="registry-server" containerID="cri-o://a8679c36d8f6244ed21af5279d461a687b918f77204ccc3b92210300a52786d7" gracePeriod=2
Nov 25 08:32:57 crc kubenswrapper[4482]: I1125 08:32:57.042711 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5xx8v"
Nov 25 08:32:57 crc kubenswrapper[4482]: I1125 08:32:57.420291 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v66ts"
Nov 25 08:32:57 crc kubenswrapper[4482]: I1125 08:32:57.570538 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08c8d209-81db-4027-90a6-f796b7c5b02c-catalog-content\") pod \"08c8d209-81db-4027-90a6-f796b7c5b02c\" (UID: \"08c8d209-81db-4027-90a6-f796b7c5b02c\") "
Nov 25 08:32:57 crc kubenswrapper[4482]: I1125 08:32:57.570870 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxs8c\" (UniqueName: \"kubernetes.io/projected/08c8d209-81db-4027-90a6-f796b7c5b02c-kube-api-access-jxs8c\") pod \"08c8d209-81db-4027-90a6-f796b7c5b02c\" (UID: \"08c8d209-81db-4027-90a6-f796b7c5b02c\") "
Nov 25 08:32:57 crc kubenswrapper[4482]: I1125 08:32:57.571032 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08c8d209-81db-4027-90a6-f796b7c5b02c-utilities\") pod \"08c8d209-81db-4027-90a6-f796b7c5b02c\" (UID: \"08c8d209-81db-4027-90a6-f796b7c5b02c\") "
Nov 25 08:32:57 crc kubenswrapper[4482]: I1125 08:32:57.571622 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08c8d209-81db-4027-90a6-f796b7c5b02c-utilities" (OuterVolumeSpecName: "utilities") pod "08c8d209-81db-4027-90a6-f796b7c5b02c" (UID: "08c8d209-81db-4027-90a6-f796b7c5b02c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:32:57 crc kubenswrapper[4482]: I1125 08:32:57.571817 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08c8d209-81db-4027-90a6-f796b7c5b02c-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:57 crc kubenswrapper[4482]: I1125 08:32:57.577077 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c8d209-81db-4027-90a6-f796b7c5b02c-kube-api-access-jxs8c" (OuterVolumeSpecName: "kube-api-access-jxs8c") pod "08c8d209-81db-4027-90a6-f796b7c5b02c" (UID: "08c8d209-81db-4027-90a6-f796b7c5b02c"). InnerVolumeSpecName "kube-api-access-jxs8c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:32:57 crc kubenswrapper[4482]: I1125 08:32:57.614335 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08c8d209-81db-4027-90a6-f796b7c5b02c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08c8d209-81db-4027-90a6-f796b7c5b02c" (UID: "08c8d209-81db-4027-90a6-f796b7c5b02c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:32:57 crc kubenswrapper[4482]: I1125 08:32:57.673488 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxs8c\" (UniqueName: \"kubernetes.io/projected/08c8d209-81db-4027-90a6-f796b7c5b02c-kube-api-access-jxs8c\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:57 crc kubenswrapper[4482]: I1125 08:32:57.673514 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08c8d209-81db-4027-90a6-f796b7c5b02c-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.024927 4482 generic.go:334] "Generic (PLEG): container finished" podID="08c8d209-81db-4027-90a6-f796b7c5b02c" containerID="a8679c36d8f6244ed21af5279d461a687b918f77204ccc3b92210300a52786d7" exitCode=0
Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.025007 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v66ts" event={"ID":"08c8d209-81db-4027-90a6-f796b7c5b02c","Type":"ContainerDied","Data":"a8679c36d8f6244ed21af5279d461a687b918f77204ccc3b92210300a52786d7"}
Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.025061 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v66ts" event={"ID":"08c8d209-81db-4027-90a6-f796b7c5b02c","Type":"ContainerDied","Data":"f545d2add7740e71347ab7027365392275c6bc8d6f222f1fd7420c3427900227"}
Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.025080 4482 scope.go:117] "RemoveContainer" containerID="a8679c36d8f6244ed21af5279d461a687b918f77204ccc3b92210300a52786d7"
Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.025239 4482 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-v66ts" Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.043858 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v66ts"] Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.047257 4482 scope.go:117] "RemoveContainer" containerID="0838de39d45deecd89a0fa4207bc74a1df9030063ffc86a497e106df6df4f309" Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.053922 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v66ts"] Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.066326 4482 scope.go:117] "RemoveContainer" containerID="07a4f96f5158d2c23469106109e5e3646dc75cc7f666a3442c9a8d4b28916b84" Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.102239 4482 scope.go:117] "RemoveContainer" containerID="a8679c36d8f6244ed21af5279d461a687b918f77204ccc3b92210300a52786d7" Nov 25 08:32:58 crc kubenswrapper[4482]: E1125 08:32:58.102581 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8679c36d8f6244ed21af5279d461a687b918f77204ccc3b92210300a52786d7\": container with ID starting with a8679c36d8f6244ed21af5279d461a687b918f77204ccc3b92210300a52786d7 not found: ID does not exist" containerID="a8679c36d8f6244ed21af5279d461a687b918f77204ccc3b92210300a52786d7" Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.102616 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8679c36d8f6244ed21af5279d461a687b918f77204ccc3b92210300a52786d7"} err="failed to get container status \"a8679c36d8f6244ed21af5279d461a687b918f77204ccc3b92210300a52786d7\": rpc error: code = NotFound desc = could not find container \"a8679c36d8f6244ed21af5279d461a687b918f77204ccc3b92210300a52786d7\": container with ID starting with a8679c36d8f6244ed21af5279d461a687b918f77204ccc3b92210300a52786d7 not found: ID does not exist" Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.102639 4482 scope.go:117] "RemoveContainer" containerID="0838de39d45deecd89a0fa4207bc74a1df9030063ffc86a497e106df6df4f309" Nov 25 08:32:58 crc kubenswrapper[4482]: E1125 08:32:58.102887 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0838de39d45deecd89a0fa4207bc74a1df9030063ffc86a497e106df6df4f309\": container with ID starting with 0838de39d45deecd89a0fa4207bc74a1df9030063ffc86a497e106df6df4f309 not found: ID does not exist" containerID="0838de39d45deecd89a0fa4207bc74a1df9030063ffc86a497e106df6df4f309" Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.102915 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0838de39d45deecd89a0fa4207bc74a1df9030063ffc86a497e106df6df4f309"} err="failed to get container status \"0838de39d45deecd89a0fa4207bc74a1df9030063ffc86a497e106df6df4f309\": rpc error: code = NotFound desc = could not find container \"0838de39d45deecd89a0fa4207bc74a1df9030063ffc86a497e106df6df4f309\": container with ID starting with 0838de39d45deecd89a0fa4207bc74a1df9030063ffc86a497e106df6df4f309 not found: ID does not exist" Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.102936 4482 scope.go:117] "RemoveContainer" containerID="07a4f96f5158d2c23469106109e5e3646dc75cc7f666a3442c9a8d4b28916b84" Nov 25 08:32:58 crc kubenswrapper[4482]: E1125 08:32:58.103107 4482 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"07a4f96f5158d2c23469106109e5e3646dc75cc7f666a3442c9a8d4b28916b84\": container with ID starting with 07a4f96f5158d2c23469106109e5e3646dc75cc7f666a3442c9a8d4b28916b84 not found: ID does not exist" containerID="07a4f96f5158d2c23469106109e5e3646dc75cc7f666a3442c9a8d4b28916b84" Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.103125 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a4f96f5158d2c23469106109e5e3646dc75cc7f666a3442c9a8d4b28916b84"} err="failed to get container status \"07a4f96f5158d2c23469106109e5e3646dc75cc7f666a3442c9a8d4b28916b84\": rpc error: code = NotFound desc = could not find container \"07a4f96f5158d2c23469106109e5e3646dc75cc7f666a3442c9a8d4b28916b84\": container with ID starting with 07a4f96f5158d2c23469106109e5e3646dc75cc7f666a3442c9a8d4b28916b84 not found: ID does not exist" Nov 25 08:32:58 crc kubenswrapper[4482]: I1125 08:32:58.934947 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xx8v"] Nov 25 08:32:59 crc kubenswrapper[4482]: I1125 08:32:59.838049 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08c8d209-81db-4027-90a6-f796b7c5b02c" path="/var/lib/kubelet/pods/08c8d209-81db-4027-90a6-f796b7c5b02c/volumes" Nov 25 08:33:00 crc kubenswrapper[4482]: I1125 08:33:00.040729 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5xx8v" podUID="27687bb4-ba1f-4be5-b66a-f2d686afde36" containerName="registry-server" containerID="cri-o://88fec25f0066e9a997855cdd3a07cd8c358d809ceb43a32ae8ac17524dbe69c4" gracePeriod=2 Nov 25 08:33:00 crc kubenswrapper[4482]: I1125 08:33:00.461077 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5xx8v" Nov 25 08:33:00 crc kubenswrapper[4482]: I1125 08:33:00.625188 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27687bb4-ba1f-4be5-b66a-f2d686afde36-utilities\") pod \"27687bb4-ba1f-4be5-b66a-f2d686afde36\" (UID: \"27687bb4-ba1f-4be5-b66a-f2d686afde36\") " Nov 25 08:33:00 crc kubenswrapper[4482]: I1125 08:33:00.625300 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ml9w\" (UniqueName: \"kubernetes.io/projected/27687bb4-ba1f-4be5-b66a-f2d686afde36-kube-api-access-7ml9w\") pod \"27687bb4-ba1f-4be5-b66a-f2d686afde36\" (UID: \"27687bb4-ba1f-4be5-b66a-f2d686afde36\") " Nov 25 08:33:00 crc kubenswrapper[4482]: I1125 08:33:00.625372 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27687bb4-ba1f-4be5-b66a-f2d686afde36-catalog-content\") pod \"27687bb4-ba1f-4be5-b66a-f2d686afde36\" (UID: \"27687bb4-ba1f-4be5-b66a-f2d686afde36\") " Nov 25 08:33:00 crc kubenswrapper[4482]: I1125 08:33:00.626163 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27687bb4-ba1f-4be5-b66a-f2d686afde36-utilities" (OuterVolumeSpecName: "utilities") pod "27687bb4-ba1f-4be5-b66a-f2d686afde36" (UID: "27687bb4-ba1f-4be5-b66a-f2d686afde36"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:33:00 crc kubenswrapper[4482]: I1125 08:33:00.633715 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27687bb4-ba1f-4be5-b66a-f2d686afde36-kube-api-access-7ml9w" (OuterVolumeSpecName: "kube-api-access-7ml9w") pod "27687bb4-ba1f-4be5-b66a-f2d686afde36" (UID: "27687bb4-ba1f-4be5-b66a-f2d686afde36"). InnerVolumeSpecName "kube-api-access-7ml9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:33:00 crc kubenswrapper[4482]: I1125 08:33:00.650827 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27687bb4-ba1f-4be5-b66a-f2d686afde36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27687bb4-ba1f-4be5-b66a-f2d686afde36" (UID: "27687bb4-ba1f-4be5-b66a-f2d686afde36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:33:00 crc kubenswrapper[4482]: I1125 08:33:00.728218 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27687bb4-ba1f-4be5-b66a-f2d686afde36-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:33:00 crc kubenswrapper[4482]: I1125 08:33:00.728247 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ml9w\" (UniqueName: \"kubernetes.io/projected/27687bb4-ba1f-4be5-b66a-f2d686afde36-kube-api-access-7ml9w\") on node \"crc\" DevicePath \"\"" Nov 25 08:33:00 crc kubenswrapper[4482]: I1125 08:33:00.728260 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27687bb4-ba1f-4be5-b66a-f2d686afde36-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.053120 4482 generic.go:334] "Generic (PLEG): container finished" podID="27687bb4-ba1f-4be5-b66a-f2d686afde36" containerID="88fec25f0066e9a997855cdd3a07cd8c358d809ceb43a32ae8ac17524dbe69c4" exitCode=0 Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.053225 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5xx8v" Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.053220 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xx8v" event={"ID":"27687bb4-ba1f-4be5-b66a-f2d686afde36","Type":"ContainerDied","Data":"88fec25f0066e9a997855cdd3a07cd8c358d809ceb43a32ae8ac17524dbe69c4"} Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.053548 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xx8v" event={"ID":"27687bb4-ba1f-4be5-b66a-f2d686afde36","Type":"ContainerDied","Data":"50b4ff0999f17a537df80e8990591cc2798989dfdf7a39dd4dc552969e122dbd"} Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.053578 4482 scope.go:117] "RemoveContainer" containerID="88fec25f0066e9a997855cdd3a07cd8c358d809ceb43a32ae8ac17524dbe69c4" Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.082121 4482 scope.go:117] "RemoveContainer" containerID="d6faafe110d7681f01786d0082c2c7efedfe807e39808cb788539003460e03e4" Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.087358 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xx8v"] Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.094728 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xx8v"] Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.120917 4482 scope.go:117] "RemoveContainer" containerID="4234308d8b6ff28440be939fcc039cd62b0aecf37638b2b40b63adad31f9e6de" Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.137388 4482 scope.go:117] "RemoveContainer" containerID="88fec25f0066e9a997855cdd3a07cd8c358d809ceb43a32ae8ac17524dbe69c4" Nov 25 08:33:01 crc kubenswrapper[4482]: E1125 08:33:01.137860 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88fec25f0066e9a997855cdd3a07cd8c358d809ceb43a32ae8ac17524dbe69c4\": container with ID starting with 88fec25f0066e9a997855cdd3a07cd8c358d809ceb43a32ae8ac17524dbe69c4 not found: ID does not exist" containerID="88fec25f0066e9a997855cdd3a07cd8c358d809ceb43a32ae8ac17524dbe69c4" Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.137899 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88fec25f0066e9a997855cdd3a07cd8c358d809ceb43a32ae8ac17524dbe69c4"} err="failed to get container status \"88fec25f0066e9a997855cdd3a07cd8c358d809ceb43a32ae8ac17524dbe69c4\": rpc error: code = NotFound desc = could not find container \"88fec25f0066e9a997855cdd3a07cd8c358d809ceb43a32ae8ac17524dbe69c4\": container with ID starting with 88fec25f0066e9a997855cdd3a07cd8c358d809ceb43a32ae8ac17524dbe69c4 not found: ID does not exist" Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.137927 4482 scope.go:117] "RemoveContainer" containerID="d6faafe110d7681f01786d0082c2c7efedfe807e39808cb788539003460e03e4" Nov 25 08:33:01 crc kubenswrapper[4482]: E1125 08:33:01.138318 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6faafe110d7681f01786d0082c2c7efedfe807e39808cb788539003460e03e4\": container with ID starting with d6faafe110d7681f01786d0082c2c7efedfe807e39808cb788539003460e03e4 not found: ID does not exist" containerID="d6faafe110d7681f01786d0082c2c7efedfe807e39808cb788539003460e03e4" Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.138370 4482 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6faafe110d7681f01786d0082c2c7efedfe807e39808cb788539003460e03e4"} err="failed to get container status \"d6faafe110d7681f01786d0082c2c7efedfe807e39808cb788539003460e03e4\": rpc error: code = NotFound desc = could not find container \"d6faafe110d7681f01786d0082c2c7efedfe807e39808cb788539003460e03e4\": container with ID starting with d6faafe110d7681f01786d0082c2c7efedfe807e39808cb788539003460e03e4 not found: ID does not exist" Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.138396 4482 scope.go:117] "RemoveContainer" containerID="4234308d8b6ff28440be939fcc039cd62b0aecf37638b2b40b63adad31f9e6de" Nov 25 08:33:01 crc kubenswrapper[4482]: E1125 08:33:01.138807 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4234308d8b6ff28440be939fcc039cd62b0aecf37638b2b40b63adad31f9e6de\": container with ID starting with 4234308d8b6ff28440be939fcc039cd62b0aecf37638b2b40b63adad31f9e6de not found: ID does not exist" containerID="4234308d8b6ff28440be939fcc039cd62b0aecf37638b2b40b63adad31f9e6de" Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.138842 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4234308d8b6ff28440be939fcc039cd62b0aecf37638b2b40b63adad31f9e6de"} err="failed to get container status \"4234308d8b6ff28440be939fcc039cd62b0aecf37638b2b40b63adad31f9e6de\": rpc error: code = NotFound desc = could not find container \"4234308d8b6ff28440be939fcc039cd62b0aecf37638b2b40b63adad31f9e6de\": container with ID starting with 4234308d8b6ff28440be939fcc039cd62b0aecf37638b2b40b63adad31f9e6de not found: ID does not exist" Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.830549 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:33:01 crc kubenswrapper[4482]: E1125 08:33:01.830912 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:33:01 crc kubenswrapper[4482]: I1125 08:33:01.838587 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27687bb4-ba1f-4be5-b66a-f2d686afde36" path="/var/lib/kubelet/pods/27687bb4-ba1f-4be5-b66a-f2d686afde36/volumes" Nov 25 08:33:12 crc kubenswrapper[4482]: I1125 08:33:12.831045 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:33:12 crc kubenswrapper[4482]: E1125 08:33:12.832007 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:33:26 crc kubenswrapper[4482]: I1125 08:33:26.831077 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:33:26 crc 
kubenswrapper[4482]: E1125 08:33:26.832146 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:33:39 crc kubenswrapper[4482]: I1125 08:33:39.831355 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:33:39 crc kubenswrapper[4482]: E1125 08:33:39.832022 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:33:50 crc kubenswrapper[4482]: I1125 08:33:50.831011 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:33:50 crc kubenswrapper[4482]: E1125 08:33:50.832469 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:34:02 crc kubenswrapper[4482]: I1125 08:34:02.830811 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:34:02 crc kubenswrapper[4482]: E1125 08:34:02.831439 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:34:13 crc kubenswrapper[4482]: I1125 08:34:13.830886 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:34:13 crc kubenswrapper[4482]: E1125 08:34:13.831400 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:34:26 crc kubenswrapper[4482]: I1125 08:34:26.831012 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:34:26 crc kubenswrapper[4482]: E1125 08:34:26.831609 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:34:38 crc kubenswrapper[4482]: I1125 08:34:38.830450 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:34:38 crc kubenswrapper[4482]: E1125 08:34:38.831077 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:34:53 crc kubenswrapper[4482]: I1125 08:34:53.831069 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:34:53 crc kubenswrapper[4482]: E1125 08:34:53.833064 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:35:06 crc kubenswrapper[4482]: I1125 08:35:06.830840 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:35:06 crc kubenswrapper[4482]: E1125 08:35:06.831578 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:35:20 crc kubenswrapper[4482]: I1125 08:35:20.831580 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:35:20 crc kubenswrapper[4482]: E1125 08:35:20.832995 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:35:33 crc kubenswrapper[4482]: I1125 08:35:33.831748 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:35:33 crc kubenswrapper[4482]: E1125 08:35:33.832828 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.057890 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-844v6"] Nov 25 08:35:34 crc kubenswrapper[4482]: E1125 08:35:34.058705 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c8d209-81db-4027-90a6-f796b7c5b02c" containerName="extract-content" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.058736 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c8d209-81db-4027-90a6-f796b7c5b02c" containerName="extract-content" Nov 25 08:35:34 crc kubenswrapper[4482]: E1125 08:35:34.058760 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27687bb4-ba1f-4be5-b66a-f2d686afde36" containerName="extract-utilities" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.058769 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="27687bb4-ba1f-4be5-b66a-f2d686afde36" containerName="extract-utilities" Nov 25 08:35:34 crc kubenswrapper[4482]: E1125 08:35:34.058796 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27687bb4-ba1f-4be5-b66a-f2d686afde36" containerName="extract-content" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.058805 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="27687bb4-ba1f-4be5-b66a-f2d686afde36" containerName="extract-content" Nov 25 08:35:34 crc kubenswrapper[4482]: E1125 08:35:34.058837 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27687bb4-ba1f-4be5-b66a-f2d686afde36" containerName="registry-server" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.058845 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="27687bb4-ba1f-4be5-b66a-f2d686afde36" containerName="registry-server" Nov 25 08:35:34 crc kubenswrapper[4482]: E1125 08:35:34.058858 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c8d209-81db-4027-90a6-f796b7c5b02c" containerName="registry-server" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.058866 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c8d209-81db-4027-90a6-f796b7c5b02c" containerName="registry-server" Nov 25 08:35:34 crc kubenswrapper[4482]: E1125 08:35:34.058891 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c8d209-81db-4027-90a6-f796b7c5b02c" containerName="extract-utilities" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.058897 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c8d209-81db-4027-90a6-f796b7c5b02c" containerName="extract-utilities" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.059164 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="27687bb4-ba1f-4be5-b66a-f2d686afde36" containerName="registry-server" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.059211 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c8d209-81db-4027-90a6-f796b7c5b02c" containerName="registry-server" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.061042 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.079746 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-844v6"] Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.134635 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11963f1f-142a-4494-8d09-5c37d0becbb7-utilities\") pod \"redhat-operators-844v6\" (UID: \"11963f1f-142a-4494-8d09-5c37d0becbb7\") " pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.134883 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11963f1f-142a-4494-8d09-5c37d0becbb7-catalog-content\") pod \"redhat-operators-844v6\" (UID: \"11963f1f-142a-4494-8d09-5c37d0becbb7\") " pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.134922 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq6sr\" (UniqueName: \"kubernetes.io/projected/11963f1f-142a-4494-8d09-5c37d0becbb7-kube-api-access-nq6sr\") pod \"redhat-operators-844v6\" (UID: \"11963f1f-142a-4494-8d09-5c37d0becbb7\") " pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.236234 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11963f1f-142a-4494-8d09-5c37d0becbb7-utilities\") pod \"redhat-operators-844v6\" (UID: \"11963f1f-142a-4494-8d09-5c37d0becbb7\") " pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.236352 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11963f1f-142a-4494-8d09-5c37d0becbb7-catalog-content\") pod \"redhat-operators-844v6\" (UID: \"11963f1f-142a-4494-8d09-5c37d0becbb7\") " pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.236380 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq6sr\" (UniqueName: \"kubernetes.io/projected/11963f1f-142a-4494-8d09-5c37d0becbb7-kube-api-access-nq6sr\") pod \"redhat-operators-844v6\" (UID: \"11963f1f-142a-4494-8d09-5c37d0becbb7\") " pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.236683 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11963f1f-142a-4494-8d09-5c37d0becbb7-utilities\") pod \"redhat-operators-844v6\" (UID: \"11963f1f-142a-4494-8d09-5c37d0becbb7\") " pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.236728 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11963f1f-142a-4494-8d09-5c37d0becbb7-catalog-content\") pod \"redhat-operators-844v6\" (UID: \"11963f1f-142a-4494-8d09-5c37d0becbb7\") " pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.259258 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nq6sr\" (UniqueName: \"kubernetes.io/projected/11963f1f-142a-4494-8d09-5c37d0becbb7-kube-api-access-nq6sr\") pod \"redhat-operators-844v6\" (UID: \"11963f1f-142a-4494-8d09-5c37d0becbb7\") " pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.392037 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:34 crc kubenswrapper[4482]: I1125 08:35:34.835422 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-844v6"] Nov 25 08:35:35 crc kubenswrapper[4482]: I1125 08:35:35.141442 4482 generic.go:334] "Generic (PLEG): container finished" podID="11963f1f-142a-4494-8d09-5c37d0becbb7" containerID="9a208d6739a4f6e87c5e7232bd4398bfc2585f3c269043431acc48945960f349" exitCode=0 Nov 25 08:35:35 crc kubenswrapper[4482]: I1125 08:35:35.141488 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-844v6" event={"ID":"11963f1f-142a-4494-8d09-5c37d0becbb7","Type":"ContainerDied","Data":"9a208d6739a4f6e87c5e7232bd4398bfc2585f3c269043431acc48945960f349"} Nov 25 08:35:35 crc kubenswrapper[4482]: I1125 08:35:35.141530 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-844v6" event={"ID":"11963f1f-142a-4494-8d09-5c37d0becbb7","Type":"ContainerStarted","Data":"ae8b0a3ea267bbd3b8c111049541d718f5b9b49d2393f878c9c8ed59fbba5d9e"} Nov 25 08:35:35 crc kubenswrapper[4482]: I1125 08:35:35.143851 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:35:36 crc kubenswrapper[4482]: I1125 08:35:36.161836 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-844v6" event={"ID":"11963f1f-142a-4494-8d09-5c37d0becbb7","Type":"ContainerStarted","Data":"2ea91c0974876ba66c9ccfd7972826338e39a74d7d5c75df860774e9164a0a98"} Nov 25 08:35:38 crc kubenswrapper[4482]: I1125 08:35:38.182114 4482 generic.go:334] "Generic (PLEG): container finished" podID="11963f1f-142a-4494-8d09-5c37d0becbb7" containerID="2ea91c0974876ba66c9ccfd7972826338e39a74d7d5c75df860774e9164a0a98" exitCode=0 Nov 25 08:35:38 crc kubenswrapper[4482]: I1125 08:35:38.182160 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-844v6" event={"ID":"11963f1f-142a-4494-8d09-5c37d0becbb7","Type":"ContainerDied","Data":"2ea91c0974876ba66c9ccfd7972826338e39a74d7d5c75df860774e9164a0a98"} Nov 25 08:35:39 crc kubenswrapper[4482]: I1125 08:35:39.195085 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-844v6" event={"ID":"11963f1f-142a-4494-8d09-5c37d0becbb7","Type":"ContainerStarted","Data":"c4464577e6b2f5607c49df5c82f0a3ce767fced4e099947226e3f30d0a173104"} Nov 25 08:35:39 crc kubenswrapper[4482]: I1125 08:35:39.218097 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-844v6" podStartSLOduration=1.7050070640000001 podStartE2EDuration="5.218074672s" podCreationTimestamp="2025-11-25 08:35:34 +0000 UTC" firstStartedPulling="2025-11-25 08:35:35.143620447 +0000 UTC m=+6509.631851706" lastFinishedPulling="2025-11-25 08:35:38.656688065 +0000 UTC m=+6513.144919314" observedRunningTime="2025-11-25 08:35:39.21287375 +0000 UTC m=+6513.701105009" watchObservedRunningTime="2025-11-25 08:35:39.218074672 +0000 UTC m=+6513.706305931" Nov 25 08:35:44 crc 
kubenswrapper[4482]: I1125 08:35:44.392843 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:44 crc kubenswrapper[4482]: I1125 08:35:44.393417 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:45 crc kubenswrapper[4482]: I1125 08:35:45.431097 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-844v6" podUID="11963f1f-142a-4494-8d09-5c37d0becbb7" containerName="registry-server" probeResult="failure" output=< Nov 25 08:35:45 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 08:35:45 crc kubenswrapper[4482]: > Nov 25 08:35:46 crc kubenswrapper[4482]: I1125 08:35:46.831203 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:35:46 crc kubenswrapper[4482]: E1125 08:35:46.831920 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:35:54 crc kubenswrapper[4482]: I1125 08:35:54.430563 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:54 crc kubenswrapper[4482]: I1125 08:35:54.475949 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:54 crc kubenswrapper[4482]: I1125 08:35:54.672052 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-844v6"] Nov 25 08:35:56 crc kubenswrapper[4482]: I1125 08:35:56.345643 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-844v6" podUID="11963f1f-142a-4494-8d09-5c37d0becbb7" containerName="registry-server" containerID="cri-o://c4464577e6b2f5607c49df5c82f0a3ce767fced4e099947226e3f30d0a173104" gracePeriod=2 Nov 25 08:35:56 crc kubenswrapper[4482]: I1125 08:35:56.721661 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:56 crc kubenswrapper[4482]: I1125 08:35:56.922701 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11963f1f-142a-4494-8d09-5c37d0becbb7-catalog-content\") pod \"11963f1f-142a-4494-8d09-5c37d0becbb7\" (UID: \"11963f1f-142a-4494-8d09-5c37d0becbb7\") " Nov 25 08:35:56 crc kubenswrapper[4482]: I1125 08:35:56.923868 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11963f1f-142a-4494-8d09-5c37d0becbb7-utilities\") pod \"11963f1f-142a-4494-8d09-5c37d0becbb7\" (UID: \"11963f1f-142a-4494-8d09-5c37d0becbb7\") " Nov 25 08:35:56 crc kubenswrapper[4482]: I1125 08:35:56.924638 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq6sr\" (UniqueName: \"kubernetes.io/projected/11963f1f-142a-4494-8d09-5c37d0becbb7-kube-api-access-nq6sr\") pod \"11963f1f-142a-4494-8d09-5c37d0becbb7\" (UID: \"11963f1f-142a-4494-8d09-5c37d0becbb7\") " Nov 25 08:35:56 crc kubenswrapper[4482]: I1125 08:35:56.924567 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11963f1f-142a-4494-8d09-5c37d0becbb7-utilities" (OuterVolumeSpecName: "utilities") pod "11963f1f-142a-4494-8d09-5c37d0becbb7" (UID: "11963f1f-142a-4494-8d09-5c37d0becbb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:35:56 crc kubenswrapper[4482]: I1125 08:35:56.936089 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11963f1f-142a-4494-8d09-5c37d0becbb7-kube-api-access-nq6sr" (OuterVolumeSpecName: "kube-api-access-nq6sr") pod "11963f1f-142a-4494-8d09-5c37d0becbb7" (UID: "11963f1f-142a-4494-8d09-5c37d0becbb7"). InnerVolumeSpecName "kube-api-access-nq6sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:35:56 crc kubenswrapper[4482]: I1125 08:35:56.987107 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11963f1f-142a-4494-8d09-5c37d0becbb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11963f1f-142a-4494-8d09-5c37d0becbb7" (UID: "11963f1f-142a-4494-8d09-5c37d0becbb7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.028084 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11963f1f-142a-4494-8d09-5c37d0becbb7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.028114 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11963f1f-142a-4494-8d09-5c37d0becbb7-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.028124 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq6sr\" (UniqueName: \"kubernetes.io/projected/11963f1f-142a-4494-8d09-5c37d0becbb7-kube-api-access-nq6sr\") on node \"crc\" DevicePath \"\"" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.357420 4482 generic.go:334] "Generic (PLEG): container finished" podID="11963f1f-142a-4494-8d09-5c37d0becbb7" containerID="c4464577e6b2f5607c49df5c82f0a3ce767fced4e099947226e3f30d0a173104" exitCode=0 Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.357467 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-844v6" event={"ID":"11963f1f-142a-4494-8d09-5c37d0becbb7","Type":"ContainerDied","Data":"c4464577e6b2f5607c49df5c82f0a3ce767fced4e099947226e3f30d0a173104"} Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.357495 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-844v6" event={"ID":"11963f1f-142a-4494-8d09-5c37d0becbb7","Type":"ContainerDied","Data":"ae8b0a3ea267bbd3b8c111049541d718f5b9b49d2393f878c9c8ed59fbba5d9e"} Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.357492 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-844v6" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.357509 4482 scope.go:117] "RemoveContainer" containerID="c4464577e6b2f5607c49df5c82f0a3ce767fced4e099947226e3f30d0a173104" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.378748 4482 scope.go:117] "RemoveContainer" containerID="2ea91c0974876ba66c9ccfd7972826338e39a74d7d5c75df860774e9164a0a98" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.403782 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-844v6"] Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.410555 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-844v6"] Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.426065 4482 scope.go:117] "RemoveContainer" containerID="9a208d6739a4f6e87c5e7232bd4398bfc2585f3c269043431acc48945960f349" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.460855 4482 scope.go:117] "RemoveContainer" containerID="c4464577e6b2f5607c49df5c82f0a3ce767fced4e099947226e3f30d0a173104" Nov 25 08:35:57 crc kubenswrapper[4482]: E1125 08:35:57.461802 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4464577e6b2f5607c49df5c82f0a3ce767fced4e099947226e3f30d0a173104\": container with ID starting with c4464577e6b2f5607c49df5c82f0a3ce767fced4e099947226e3f30d0a173104 not found: ID does not exist" containerID="c4464577e6b2f5607c49df5c82f0a3ce767fced4e099947226e3f30d0a173104" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.461836 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4464577e6b2f5607c49df5c82f0a3ce767fced4e099947226e3f30d0a173104"} err="failed to get container status \"c4464577e6b2f5607c49df5c82f0a3ce767fced4e099947226e3f30d0a173104\": rpc error: code = NotFound desc = could not find container \"c4464577e6b2f5607c49df5c82f0a3ce767fced4e099947226e3f30d0a173104\": container with ID starting with c4464577e6b2f5607c49df5c82f0a3ce767fced4e099947226e3f30d0a173104 not found: ID does not exist" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.461860 4482 scope.go:117] "RemoveContainer" containerID="2ea91c0974876ba66c9ccfd7972826338e39a74d7d5c75df860774e9164a0a98" Nov 25 08:35:57 crc kubenswrapper[4482]: E1125 08:35:57.462321 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea91c0974876ba66c9ccfd7972826338e39a74d7d5c75df860774e9164a0a98\": container with ID starting with 2ea91c0974876ba66c9ccfd7972826338e39a74d7d5c75df860774e9164a0a98 not found: ID does not exist" containerID="2ea91c0974876ba66c9ccfd7972826338e39a74d7d5c75df860774e9164a0a98" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.462358 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea91c0974876ba66c9ccfd7972826338e39a74d7d5c75df860774e9164a0a98"} err="failed to get container status \"2ea91c0974876ba66c9ccfd7972826338e39a74d7d5c75df860774e9164a0a98\": rpc error: code = NotFound desc = could not find container \"2ea91c0974876ba66c9ccfd7972826338e39a74d7d5c75df860774e9164a0a98\": container with ID starting with 2ea91c0974876ba66c9ccfd7972826338e39a74d7d5c75df860774e9164a0a98 not found: ID does not exist" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.462385 4482 scope.go:117] "RemoveContainer" 
containerID="9a208d6739a4f6e87c5e7232bd4398bfc2585f3c269043431acc48945960f349" Nov 25 08:35:57 crc kubenswrapper[4482]: E1125 08:35:57.462825 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a208d6739a4f6e87c5e7232bd4398bfc2585f3c269043431acc48945960f349\": container with ID starting with 9a208d6739a4f6e87c5e7232bd4398bfc2585f3c269043431acc48945960f349 not found: ID does not exist" containerID="9a208d6739a4f6e87c5e7232bd4398bfc2585f3c269043431acc48945960f349" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.462879 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a208d6739a4f6e87c5e7232bd4398bfc2585f3c269043431acc48945960f349"} err="failed to get container status \"9a208d6739a4f6e87c5e7232bd4398bfc2585f3c269043431acc48945960f349\": rpc error: code = NotFound desc = could not find container \"9a208d6739a4f6e87c5e7232bd4398bfc2585f3c269043431acc48945960f349\": container with ID starting with 9a208d6739a4f6e87c5e7232bd4398bfc2585f3c269043431acc48945960f349 not found: ID does not exist" Nov 25 08:35:57 crc kubenswrapper[4482]: I1125 08:35:57.850097 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11963f1f-142a-4494-8d09-5c37d0becbb7" path="/var/lib/kubelet/pods/11963f1f-142a-4494-8d09-5c37d0becbb7/volumes" Nov 25 08:36:01 crc kubenswrapper[4482]: I1125 08:36:01.831048 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:36:01 crc kubenswrapper[4482]: E1125 08:36:01.831838 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:36:16 crc kubenswrapper[4482]: I1125 08:36:16.830255 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:36:16 crc kubenswrapper[4482]: E1125 08:36:16.831018 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:36:27 crc kubenswrapper[4482]: I1125 08:36:27.830474 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:36:27 crc kubenswrapper[4482]: E1125 08:36:27.831333 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:36:40 crc kubenswrapper[4482]: I1125 08:36:40.831242 4482 scope.go:117] "RemoveContainer" 
containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:36:40 crc kubenswrapper[4482]: E1125 08:36:40.832081 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:36:51 crc kubenswrapper[4482]: I1125 08:36:51.830930 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:36:51 crc kubenswrapper[4482]: E1125 08:36:51.832610 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:37:04 crc kubenswrapper[4482]: I1125 08:37:04.831093 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:37:04 crc kubenswrapper[4482]: E1125 08:37:04.831977 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:37:19 crc kubenswrapper[4482]: I1125 08:37:19.831110 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:37:19 crc kubenswrapper[4482]: I1125 08:37:19.990843 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"3e552ba4e30bd2f721d5ecc535e7afb3328dcb62525aa3c8f581aa0679f4f91d"} Nov 25 08:39:39 crc kubenswrapper[4482]: I1125 08:39:39.117684 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:39:39 crc kubenswrapper[4482]: I1125 08:39:39.118593 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:40:09 crc kubenswrapper[4482]: I1125 08:40:09.118048 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:40:09 crc 
kubenswrapper[4482]: I1125 08:40:09.119554 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:40:39 crc kubenswrapper[4482]: I1125 08:40:39.117832 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:40:39 crc kubenswrapper[4482]: I1125 08:40:39.118429 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:40:39 crc kubenswrapper[4482]: I1125 08:40:39.118473 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 08:40:39 crc kubenswrapper[4482]: I1125 08:40:39.119257 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e552ba4e30bd2f721d5ecc535e7afb3328dcb62525aa3c8f581aa0679f4f91d"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:40:39 crc kubenswrapper[4482]: I1125 08:40:39.119307 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://3e552ba4e30bd2f721d5ecc535e7afb3328dcb62525aa3c8f581aa0679f4f91d" gracePeriod=600 Nov 25 08:40:39 crc kubenswrapper[4482]: I1125 08:40:39.488487 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="3e552ba4e30bd2f721d5ecc535e7afb3328dcb62525aa3c8f581aa0679f4f91d" exitCode=0 Nov 25 08:40:39 crc kubenswrapper[4482]: I1125 08:40:39.488798 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"3e552ba4e30bd2f721d5ecc535e7afb3328dcb62525aa3c8f581aa0679f4f91d"} Nov 25 08:40:39 crc kubenswrapper[4482]: I1125 08:40:39.488833 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514"} Nov 25 08:40:39 crc kubenswrapper[4482]: I1125 08:40:39.488849 4482 scope.go:117] "RemoveContainer" containerID="e056f6eb2e74f7723a380880583e21159c277e05c04ced50ca3526597518b0b9" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.624942 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6c2nl"] Nov 25 08:41:58 crc kubenswrapper[4482]: E1125 08:41:58.625612 4482 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="11963f1f-142a-4494-8d09-5c37d0becbb7" containerName="extract-content" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.625624 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="11963f1f-142a-4494-8d09-5c37d0becbb7" containerName="extract-content" Nov 25 08:41:58 crc kubenswrapper[4482]: E1125 08:41:58.625642 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11963f1f-142a-4494-8d09-5c37d0becbb7" containerName="registry-server" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.625648 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="11963f1f-142a-4494-8d09-5c37d0becbb7" containerName="registry-server" Nov 25 08:41:58 crc kubenswrapper[4482]: E1125 08:41:58.625669 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11963f1f-142a-4494-8d09-5c37d0becbb7" containerName="extract-utilities" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.625674 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="11963f1f-142a-4494-8d09-5c37d0becbb7" containerName="extract-utilities" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.625880 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="11963f1f-142a-4494-8d09-5c37d0becbb7" containerName="registry-server" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.627015 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.634888 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6c2nl"] Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.689962 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6j4k\" (UniqueName: \"kubernetes.io/projected/1e21f30a-3e73-44e4-b69d-50979d0ba875-kube-api-access-m6j4k\") pod \"certified-operators-6c2nl\" (UID: \"1e21f30a-3e73-44e4-b69d-50979d0ba875\") " pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.690039 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e21f30a-3e73-44e4-b69d-50979d0ba875-catalog-content\") pod \"certified-operators-6c2nl\" (UID: \"1e21f30a-3e73-44e4-b69d-50979d0ba875\") " pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.690106 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e21f30a-3e73-44e4-b69d-50979d0ba875-utilities\") pod \"certified-operators-6c2nl\" (UID: \"1e21f30a-3e73-44e4-b69d-50979d0ba875\") " pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.791418 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e21f30a-3e73-44e4-b69d-50979d0ba875-utilities\") pod \"certified-operators-6c2nl\" (UID: \"1e21f30a-3e73-44e4-b69d-50979d0ba875\") " pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.791740 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6j4k\" (UniqueName: \"kubernetes.io/projected/1e21f30a-3e73-44e4-b69d-50979d0ba875-kube-api-access-m6j4k\") 
pod \"certified-operators-6c2nl\" (UID: \"1e21f30a-3e73-44e4-b69d-50979d0ba875\") " pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.791919 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e21f30a-3e73-44e4-b69d-50979d0ba875-utilities\") pod \"certified-operators-6c2nl\" (UID: \"1e21f30a-3e73-44e4-b69d-50979d0ba875\") " pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.791922 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e21f30a-3e73-44e4-b69d-50979d0ba875-catalog-content\") pod \"certified-operators-6c2nl\" (UID: \"1e21f30a-3e73-44e4-b69d-50979d0ba875\") " pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.792290 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e21f30a-3e73-44e4-b69d-50979d0ba875-catalog-content\") pod \"certified-operators-6c2nl\" (UID: \"1e21f30a-3e73-44e4-b69d-50979d0ba875\") " pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.815106 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6j4k\" (UniqueName: \"kubernetes.io/projected/1e21f30a-3e73-44e4-b69d-50979d0ba875-kube-api-access-m6j4k\") pod \"certified-operators-6c2nl\" (UID: \"1e21f30a-3e73-44e4-b69d-50979d0ba875\") " pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:41:58 crc kubenswrapper[4482]: I1125 08:41:58.942573 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:41:59 crc kubenswrapper[4482]: I1125 08:41:59.387378 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6c2nl"] Nov 25 08:42:00 crc kubenswrapper[4482]: I1125 08:42:00.051401 4482 generic.go:334] "Generic (PLEG): container finished" podID="1e21f30a-3e73-44e4-b69d-50979d0ba875" containerID="1f8767c5d74d7d519c870e5a0a23d4592585a2bb2a1a8a751afa7a711180d0cd" exitCode=0 Nov 25 08:42:00 crc kubenswrapper[4482]: I1125 08:42:00.051610 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c2nl" event={"ID":"1e21f30a-3e73-44e4-b69d-50979d0ba875","Type":"ContainerDied","Data":"1f8767c5d74d7d519c870e5a0a23d4592585a2bb2a1a8a751afa7a711180d0cd"} Nov 25 08:42:00 crc kubenswrapper[4482]: I1125 08:42:00.051633 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c2nl" event={"ID":"1e21f30a-3e73-44e4-b69d-50979d0ba875","Type":"ContainerStarted","Data":"82c7fd38210544a7b1832d56dee899ad7bcb1f0ed25b848f1be8f697ad4ef206"} Nov 25 08:42:00 crc kubenswrapper[4482]: I1125 08:42:00.053437 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:42:01 crc kubenswrapper[4482]: I1125 08:42:01.059932 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c2nl" event={"ID":"1e21f30a-3e73-44e4-b69d-50979d0ba875","Type":"ContainerStarted","Data":"c3ddfbe8342f13ed9bf7975633e19efe2af403c3ec2e1382ce68e574584f37a0"} Nov 25 08:42:02 crc kubenswrapper[4482]: I1125 08:42:02.067381 4482 generic.go:334] "Generic (PLEG): container finished" podID="1e21f30a-3e73-44e4-b69d-50979d0ba875" containerID="c3ddfbe8342f13ed9bf7975633e19efe2af403c3ec2e1382ce68e574584f37a0" exitCode=0 Nov 25 08:42:02 crc kubenswrapper[4482]: I1125 08:42:02.067423 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c2nl" event={"ID":"1e21f30a-3e73-44e4-b69d-50979d0ba875","Type":"ContainerDied","Data":"c3ddfbe8342f13ed9bf7975633e19efe2af403c3ec2e1382ce68e574584f37a0"} Nov 25 08:42:03 crc kubenswrapper[4482]: I1125 08:42:03.075639 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c2nl" event={"ID":"1e21f30a-3e73-44e4-b69d-50979d0ba875","Type":"ContainerStarted","Data":"0ecf91dab1715952f5ed21cac116807310628d78c64c0a2e30f49bdb72cd086e"} Nov 25 08:42:03 crc kubenswrapper[4482]: I1125 08:42:03.091725 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6c2nl" podStartSLOduration=2.459240519 podStartE2EDuration="5.091712874s" podCreationTimestamp="2025-11-25 08:41:58 +0000 UTC" firstStartedPulling="2025-11-25 08:42:00.053223179 +0000 UTC m=+6894.541454438" lastFinishedPulling="2025-11-25 08:42:02.685695533 +0000 UTC m=+6897.173926793" observedRunningTime="2025-11-25 08:42:03.089193185 +0000 UTC m=+6897.577424464" watchObservedRunningTime="2025-11-25 08:42:03.091712874 +0000 UTC m=+6897.579944133" Nov 25 08:42:08 crc kubenswrapper[4482]: I1125 08:42:08.943593 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:42:08 crc kubenswrapper[4482]: I1125 08:42:08.944028 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:42:08 crc kubenswrapper[4482]: I1125 08:42:08.996999 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:42:09 crc kubenswrapper[4482]: I1125 08:42:09.153092 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:42:09 crc kubenswrapper[4482]: I1125 08:42:09.228959 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6c2nl"] Nov 25 08:42:11 crc kubenswrapper[4482]: I1125 08:42:11.124865 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6c2nl" podUID="1e21f30a-3e73-44e4-b69d-50979d0ba875" containerName="registry-server" containerID="cri-o://0ecf91dab1715952f5ed21cac116807310628d78c64c0a2e30f49bdb72cd086e" gracePeriod=2 Nov 25 08:42:11 crc kubenswrapper[4482]: I1125 08:42:11.560102 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6c2nl" Nov 25 08:42:11 crc kubenswrapper[4482]: I1125 08:42:11.714595 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e21f30a-3e73-44e4-b69d-50979d0ba875-catalog-content\") pod \"1e21f30a-3e73-44e4-b69d-50979d0ba875\" (UID: \"1e21f30a-3e73-44e4-b69d-50979d0ba875\") " Nov 25 08:42:11 crc kubenswrapper[4482]: I1125 08:42:11.714665 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e21f30a-3e73-44e4-b69d-50979d0ba875-utilities\") pod \"1e21f30a-3e73-44e4-b69d-50979d0ba875\" (UID: \"1e21f30a-3e73-44e4-b69d-50979d0ba875\") " Nov 25 08:42:11 crc kubenswrapper[4482]: I1125 08:42:11.714767 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6j4k\" (UniqueName: \"kubernetes.io/projected/1e21f30a-3e73-44e4-b69d-50979d0ba875-kube-api-access-m6j4k\") pod \"1e21f30a-3e73-44e4-b69d-50979d0ba875\" (UID: \"1e21f30a-3e73-44e4-b69d-50979d0ba875\") " Nov 25 08:42:11 crc kubenswrapper[4482]: I1125 08:42:11.715083 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e21f30a-3e73-44e4-b69d-50979d0ba875-utilities" (OuterVolumeSpecName: "utilities") pod "1e21f30a-3e73-44e4-b69d-50979d0ba875" (UID: "1e21f30a-3e73-44e4-b69d-50979d0ba875"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:42:11 crc kubenswrapper[4482]: I1125 08:42:11.715463 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e21f30a-3e73-44e4-b69d-50979d0ba875-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:42:11 crc kubenswrapper[4482]: I1125 08:42:11.724071 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e21f30a-3e73-44e4-b69d-50979d0ba875-kube-api-access-m6j4k" (OuterVolumeSpecName: "kube-api-access-m6j4k") pod "1e21f30a-3e73-44e4-b69d-50979d0ba875" (UID: "1e21f30a-3e73-44e4-b69d-50979d0ba875"). InnerVolumeSpecName "kube-api-access-m6j4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:42:11 crc kubenswrapper[4482]: I1125 08:42:11.757897 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e21f30a-3e73-44e4-b69d-50979d0ba875-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e21f30a-3e73-44e4-b69d-50979d0ba875" (UID: "1e21f30a-3e73-44e4-b69d-50979d0ba875"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:42:11 crc kubenswrapper[4482]: I1125 08:42:11.817311 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e21f30a-3e73-44e4-b69d-50979d0ba875-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:42:11 crc kubenswrapper[4482]: I1125 08:42:11.817456 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6j4k\" (UniqueName: \"kubernetes.io/projected/1e21f30a-3e73-44e4-b69d-50979d0ba875-kube-api-access-m6j4k\") on node \"crc\" DevicePath \"\"" Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.134080 4482 generic.go:334] "Generic (PLEG): container finished" podID="1e21f30a-3e73-44e4-b69d-50979d0ba875" containerID="0ecf91dab1715952f5ed21cac116807310628d78c64c0a2e30f49bdb72cd086e" exitCode=0 Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.134117 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c2nl" event={"ID":"1e21f30a-3e73-44e4-b69d-50979d0ba875","Type":"ContainerDied","Data":"0ecf91dab1715952f5ed21cac116807310628d78c64c0a2e30f49bdb72cd086e"} Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.134143 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6c2nl" event={"ID":"1e21f30a-3e73-44e4-b69d-50979d0ba875","Type":"ContainerDied","Data":"82c7fd38210544a7b1832d56dee899ad7bcb1f0ed25b848f1be8f697ad4ef206"} Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.134160 4482 scope.go:117] "RemoveContainer" containerID="0ecf91dab1715952f5ed21cac116807310628d78c64c0a2e30f49bdb72cd086e" Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.134839 4482 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.134839 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6c2nl"
Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.151568 4482 scope.go:117] "RemoveContainer" containerID="c3ddfbe8342f13ed9bf7975633e19efe2af403c3ec2e1382ce68e574584f37a0"
Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.153732 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6c2nl"]
Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.160595 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6c2nl"]
Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.171344 4482 scope.go:117] "RemoveContainer" containerID="1f8767c5d74d7d519c870e5a0a23d4592585a2bb2a1a8a751afa7a711180d0cd"
Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.208133 4482 scope.go:117] "RemoveContainer" containerID="0ecf91dab1715952f5ed21cac116807310628d78c64c0a2e30f49bdb72cd086e"
Nov 25 08:42:12 crc kubenswrapper[4482]: E1125 08:42:12.208499 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ecf91dab1715952f5ed21cac116807310628d78c64c0a2e30f49bdb72cd086e\": container with ID starting with 0ecf91dab1715952f5ed21cac116807310628d78c64c0a2e30f49bdb72cd086e not found: ID does not exist" containerID="0ecf91dab1715952f5ed21cac116807310628d78c64c0a2e30f49bdb72cd086e"
Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.208528 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ecf91dab1715952f5ed21cac116807310628d78c64c0a2e30f49bdb72cd086e"} err="failed to get container status \"0ecf91dab1715952f5ed21cac116807310628d78c64c0a2e30f49bdb72cd086e\": rpc error: code = NotFound desc = could not find container \"0ecf91dab1715952f5ed21cac116807310628d78c64c0a2e30f49bdb72cd086e\": container with ID starting with 0ecf91dab1715952f5ed21cac116807310628d78c64c0a2e30f49bdb72cd086e not found: ID does not exist"
Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.208545 4482 scope.go:117] "RemoveContainer" containerID="c3ddfbe8342f13ed9bf7975633e19efe2af403c3ec2e1382ce68e574584f37a0"
Nov 25 08:42:12 crc kubenswrapper[4482]: E1125 08:42:12.208930 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3ddfbe8342f13ed9bf7975633e19efe2af403c3ec2e1382ce68e574584f37a0\": container with ID starting with c3ddfbe8342f13ed9bf7975633e19efe2af403c3ec2e1382ce68e574584f37a0 not found: ID does not exist" containerID="c3ddfbe8342f13ed9bf7975633e19efe2af403c3ec2e1382ce68e574584f37a0"
Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.208948 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3ddfbe8342f13ed9bf7975633e19efe2af403c3ec2e1382ce68e574584f37a0"} err="failed to get container status \"c3ddfbe8342f13ed9bf7975633e19efe2af403c3ec2e1382ce68e574584f37a0\": rpc error: code = NotFound desc = could not find container \"c3ddfbe8342f13ed9bf7975633e19efe2af403c3ec2e1382ce68e574584f37a0\": container with ID starting with c3ddfbe8342f13ed9bf7975633e19efe2af403c3ec2e1382ce68e574584f37a0 not found: ID does not exist"
Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.208964 4482 scope.go:117] "RemoveContainer" containerID="1f8767c5d74d7d519c870e5a0a23d4592585a2bb2a1a8a751afa7a711180d0cd"
failed" err="rpc error: code = NotFound desc = could not find container \"1f8767c5d74d7d519c870e5a0a23d4592585a2bb2a1a8a751afa7a711180d0cd\": container with ID starting with 1f8767c5d74d7d519c870e5a0a23d4592585a2bb2a1a8a751afa7a711180d0cd not found: ID does not exist" containerID="1f8767c5d74d7d519c870e5a0a23d4592585a2bb2a1a8a751afa7a711180d0cd" Nov 25 08:42:12 crc kubenswrapper[4482]: I1125 08:42:12.209303 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f8767c5d74d7d519c870e5a0a23d4592585a2bb2a1a8a751afa7a711180d0cd"} err="failed to get container status \"1f8767c5d74d7d519c870e5a0a23d4592585a2bb2a1a8a751afa7a711180d0cd\": rpc error: code = NotFound desc = could not find container \"1f8767c5d74d7d519c870e5a0a23d4592585a2bb2a1a8a751afa7a711180d0cd\": container with ID starting with 1f8767c5d74d7d519c870e5a0a23d4592585a2bb2a1a8a751afa7a711180d0cd not found: ID does not exist" Nov 25 08:42:13 crc kubenswrapper[4482]: I1125 08:42:13.838742 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e21f30a-3e73-44e4-b69d-50979d0ba875" path="/var/lib/kubelet/pods/1e21f30a-3e73-44e4-b69d-50979d0ba875/volumes" Nov 25 08:42:39 crc kubenswrapper[4482]: I1125 08:42:39.117238 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:42:39 crc kubenswrapper[4482]: I1125 08:42:39.117600 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.288955 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-97wlw"] Nov 25 08:43:01 crc kubenswrapper[4482]: E1125 08:43:01.289757 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e21f30a-3e73-44e4-b69d-50979d0ba875" containerName="registry-server" Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.289770 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e21f30a-3e73-44e4-b69d-50979d0ba875" containerName="registry-server" Nov 25 08:43:01 crc kubenswrapper[4482]: E1125 08:43:01.289798 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e21f30a-3e73-44e4-b69d-50979d0ba875" containerName="extract-content" Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.289804 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e21f30a-3e73-44e4-b69d-50979d0ba875" containerName="extract-content" Nov 25 08:43:01 crc kubenswrapper[4482]: E1125 08:43:01.289819 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e21f30a-3e73-44e4-b69d-50979d0ba875" containerName="extract-utilities" Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.289825 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e21f30a-3e73-44e4-b69d-50979d0ba875" containerName="extract-utilities" Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.290025 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e21f30a-3e73-44e4-b69d-50979d0ba875" containerName="registry-server" Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 
Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.291347 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-97wlw"
Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.296422 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-97wlw"]
Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.441979 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14ad32dc-ce12-4be0-8759-cee4c8ceb9cf-catalog-content\") pod \"community-operators-97wlw\" (UID: \"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf\") " pod="openshift-marketplace/community-operators-97wlw"
Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.442194 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sftd5\" (UniqueName: \"kubernetes.io/projected/14ad32dc-ce12-4be0-8759-cee4c8ceb9cf-kube-api-access-sftd5\") pod \"community-operators-97wlw\" (UID: \"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf\") " pod="openshift-marketplace/community-operators-97wlw"
Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.442300 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14ad32dc-ce12-4be0-8759-cee4c8ceb9cf-utilities\") pod \"community-operators-97wlw\" (UID: \"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf\") " pod="openshift-marketplace/community-operators-97wlw"
Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.543966 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14ad32dc-ce12-4be0-8759-cee4c8ceb9cf-catalog-content\") pod \"community-operators-97wlw\" (UID: \"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf\") " pod="openshift-marketplace/community-operators-97wlw"
Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.544017 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sftd5\" (UniqueName: \"kubernetes.io/projected/14ad32dc-ce12-4be0-8759-cee4c8ceb9cf-kube-api-access-sftd5\") pod \"community-operators-97wlw\" (UID: \"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf\") " pod="openshift-marketplace/community-operators-97wlw"
Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.544044 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14ad32dc-ce12-4be0-8759-cee4c8ceb9cf-utilities\") pod \"community-operators-97wlw\" (UID: \"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf\") " pod="openshift-marketplace/community-operators-97wlw"
Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.544504 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14ad32dc-ce12-4be0-8759-cee4c8ceb9cf-utilities\") pod \"community-operators-97wlw\" (UID: \"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf\") " pod="openshift-marketplace/community-operators-97wlw"
Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.544641 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14ad32dc-ce12-4be0-8759-cee4c8ceb9cf-catalog-content\") pod \"community-operators-97wlw\" (UID: \"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf\") " pod="openshift-marketplace/community-operators-97wlw"
Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.563613 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sftd5\" (UniqueName: \"kubernetes.io/projected/14ad32dc-ce12-4be0-8759-cee4c8ceb9cf-kube-api-access-sftd5\") pod \"community-operators-97wlw\" (UID: \"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf\") " pod="openshift-marketplace/community-operators-97wlw"
Nov 25 08:43:01 crc kubenswrapper[4482]: I1125 08:43:01.608356 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-97wlw"
Nov 25 08:43:02 crc kubenswrapper[4482]: I1125 08:43:02.181643 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-97wlw"]
Nov 25 08:43:02 crc kubenswrapper[4482]: I1125 08:43:02.477541 4482 generic.go:334] "Generic (PLEG): container finished" podID="14ad32dc-ce12-4be0-8759-cee4c8ceb9cf" containerID="e0e0f91faa0c91e477006d57c8cf600ba38cb0cb4af86c44094ef4397d5690c6" exitCode=0
Nov 25 08:43:02 crc kubenswrapper[4482]: I1125 08:43:02.477584 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97wlw" event={"ID":"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf","Type":"ContainerDied","Data":"e0e0f91faa0c91e477006d57c8cf600ba38cb0cb4af86c44094ef4397d5690c6"}
Nov 25 08:43:02 crc kubenswrapper[4482]: I1125 08:43:02.477742 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97wlw" event={"ID":"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf","Type":"ContainerStarted","Data":"8e071c374b99b60853f817540f04a58595d169aeb29f2c7c16ec9b7f1de88530"}
Nov 25 08:43:07 crc kubenswrapper[4482]: I1125 08:43:07.511867 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97wlw" event={"ID":"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf","Type":"ContainerStarted","Data":"0af32fd3824e038668ba4a715d96d0bf34af17252865ce7b7409da1c9dde7ac4"}
Nov 25 08:43:08 crc kubenswrapper[4482]: I1125 08:43:08.518831 4482 generic.go:334] "Generic (PLEG): container finished" podID="14ad32dc-ce12-4be0-8759-cee4c8ceb9cf" containerID="0af32fd3824e038668ba4a715d96d0bf34af17252865ce7b7409da1c9dde7ac4" exitCode=0
Nov 25 08:43:08 crc kubenswrapper[4482]: I1125 08:43:08.518867 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97wlw" event={"ID":"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf","Type":"ContainerDied","Data":"0af32fd3824e038668ba4a715d96d0bf34af17252865ce7b7409da1c9dde7ac4"}
Nov 25 08:43:09 crc kubenswrapper[4482]: I1125 08:43:09.117356 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 08:43:09 crc kubenswrapper[4482]: I1125 08:43:09.117529 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
event={"ID":"14ad32dc-ce12-4be0-8759-cee4c8ceb9cf","Type":"ContainerStarted","Data":"0363939b1114461ade9312dc88018aa14daf55b1b5192331546e622cf37a2329"} Nov 25 08:43:09 crc kubenswrapper[4482]: I1125 08:43:09.546158 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-97wlw" podStartSLOduration=2.04947735 podStartE2EDuration="8.546142946s" podCreationTimestamp="2025-11-25 08:43:01 +0000 UTC" firstStartedPulling="2025-11-25 08:43:02.478723488 +0000 UTC m=+6956.966954747" lastFinishedPulling="2025-11-25 08:43:08.975389084 +0000 UTC m=+6963.463620343" observedRunningTime="2025-11-25 08:43:09.545705932 +0000 UTC m=+6964.033937191" watchObservedRunningTime="2025-11-25 08:43:09.546142946 +0000 UTC m=+6964.034374205" Nov 25 08:43:11 crc kubenswrapper[4482]: I1125 08:43:11.609436 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-97wlw" Nov 25 08:43:11 crc kubenswrapper[4482]: I1125 08:43:11.609654 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-97wlw" Nov 25 08:43:11 crc kubenswrapper[4482]: I1125 08:43:11.644098 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-97wlw" Nov 25 08:43:15 crc kubenswrapper[4482]: I1125 08:43:15.272672 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rb2z2"] Nov 25 08:43:15 crc kubenswrapper[4482]: I1125 08:43:15.275105 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb2z2" Nov 25 08:43:15 crc kubenswrapper[4482]: I1125 08:43:15.286053 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb2z2"] Nov 25 08:43:15 crc kubenswrapper[4482]: I1125 08:43:15.369652 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-catalog-content\") pod \"redhat-marketplace-rb2z2\" (UID: \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\") " pod="openshift-marketplace/redhat-marketplace-rb2z2" Nov 25 08:43:15 crc kubenswrapper[4482]: I1125 08:43:15.369731 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjz94\" (UniqueName: \"kubernetes.io/projected/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-kube-api-access-vjz94\") pod \"redhat-marketplace-rb2z2\" (UID: \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\") " pod="openshift-marketplace/redhat-marketplace-rb2z2" Nov 25 08:43:15 crc kubenswrapper[4482]: I1125 08:43:15.369804 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-utilities\") pod \"redhat-marketplace-rb2z2\" (UID: \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\") " pod="openshift-marketplace/redhat-marketplace-rb2z2" Nov 25 08:43:15 crc kubenswrapper[4482]: I1125 08:43:15.471254 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-catalog-content\") pod \"redhat-marketplace-rb2z2\" (UID: \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\") " pod="openshift-marketplace/redhat-marketplace-rb2z2" Nov 25 08:43:15 crc 
Nov 25 08:43:15 crc kubenswrapper[4482]: I1125 08:43:15.471328 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjz94\" (UniqueName: \"kubernetes.io/projected/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-kube-api-access-vjz94\") pod \"redhat-marketplace-rb2z2\" (UID: \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\") " pod="openshift-marketplace/redhat-marketplace-rb2z2"
Nov 25 08:43:15 crc kubenswrapper[4482]: I1125 08:43:15.471383 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-utilities\") pod \"redhat-marketplace-rb2z2\" (UID: \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\") " pod="openshift-marketplace/redhat-marketplace-rb2z2"
Nov 25 08:43:15 crc kubenswrapper[4482]: I1125 08:43:15.471927 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-catalog-content\") pod \"redhat-marketplace-rb2z2\" (UID: \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\") " pod="openshift-marketplace/redhat-marketplace-rb2z2"
Nov 25 08:43:15 crc kubenswrapper[4482]: I1125 08:43:15.472028 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-utilities\") pod \"redhat-marketplace-rb2z2\" (UID: \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\") " pod="openshift-marketplace/redhat-marketplace-rb2z2"
Nov 25 08:43:15 crc kubenswrapper[4482]: I1125 08:43:15.495990 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjz94\" (UniqueName: \"kubernetes.io/projected/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-kube-api-access-vjz94\") pod \"redhat-marketplace-rb2z2\" (UID: \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\") " pod="openshift-marketplace/redhat-marketplace-rb2z2"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb2z2" Nov 25 08:43:16 crc kubenswrapper[4482]: I1125 08:43:16.099482 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb2z2"] Nov 25 08:43:16 crc kubenswrapper[4482]: I1125 08:43:16.579671 4482 generic.go:334] "Generic (PLEG): container finished" podID="94b5e893-7dad-4b48-a2d6-6a6df3c65df0" containerID="7994fd0fa736a970d019bd6030a7bcf56df4054e6cd1f45998d2aee4610c8e3b" exitCode=0 Nov 25 08:43:16 crc kubenswrapper[4482]: I1125 08:43:16.579880 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb2z2" event={"ID":"94b5e893-7dad-4b48-a2d6-6a6df3c65df0","Type":"ContainerDied","Data":"7994fd0fa736a970d019bd6030a7bcf56df4054e6cd1f45998d2aee4610c8e3b"} Nov 25 08:43:16 crc kubenswrapper[4482]: I1125 08:43:16.579929 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb2z2" event={"ID":"94b5e893-7dad-4b48-a2d6-6a6df3c65df0","Type":"ContainerStarted","Data":"053a072d99f872fa000eced8a592c155f45753e318e32f3e11e7bf0c50b9b5d6"} Nov 25 08:43:17 crc kubenswrapper[4482]: I1125 08:43:17.589057 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb2z2" event={"ID":"94b5e893-7dad-4b48-a2d6-6a6df3c65df0","Type":"ContainerStarted","Data":"8d0652222525326e7e8949db21bcfd586d3d1bc8e4b8db7f07f021379b5ab419"} Nov 25 08:43:18 crc kubenswrapper[4482]: I1125 08:43:18.597123 4482 generic.go:334] "Generic (PLEG): container finished" podID="94b5e893-7dad-4b48-a2d6-6a6df3c65df0" containerID="8d0652222525326e7e8949db21bcfd586d3d1bc8e4b8db7f07f021379b5ab419" exitCode=0 Nov 25 08:43:18 crc kubenswrapper[4482]: I1125 08:43:18.597206 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb2z2" event={"ID":"94b5e893-7dad-4b48-a2d6-6a6df3c65df0","Type":"ContainerDied","Data":"8d0652222525326e7e8949db21bcfd586d3d1bc8e4b8db7f07f021379b5ab419"} Nov 25 08:43:19 crc kubenswrapper[4482]: I1125 08:43:19.605683 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb2z2" event={"ID":"94b5e893-7dad-4b48-a2d6-6a6df3c65df0","Type":"ContainerStarted","Data":"08193a3ace966c2941215711a6439bc0c2c1368a5cbe3162e5a770038054863a"} Nov 25 08:43:21 crc kubenswrapper[4482]: I1125 08:43:21.642131 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-97wlw" Nov 25 08:43:21 crc kubenswrapper[4482]: I1125 08:43:21.660458 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rb2z2" podStartSLOduration=4.194610556 podStartE2EDuration="6.660441539s" podCreationTimestamp="2025-11-25 08:43:15 +0000 UTC" firstStartedPulling="2025-11-25 08:43:16.58187519 +0000 UTC m=+6971.070106450" lastFinishedPulling="2025-11-25 08:43:19.047706173 +0000 UTC m=+6973.535937433" observedRunningTime="2025-11-25 08:43:19.624195075 +0000 UTC m=+6974.112426334" watchObservedRunningTime="2025-11-25 08:43:21.660441539 +0000 UTC m=+6976.148672798" Nov 25 08:43:22 crc kubenswrapper[4482]: I1125 08:43:22.482027 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-97wlw"] Nov 25 08:43:22 crc kubenswrapper[4482]: I1125 08:43:22.663230 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-df5lj"] Nov 25 
Nov 25 08:43:22 crc kubenswrapper[4482]: I1125 08:43:22.664468 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-df5lj" podUID="2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" containerName="registry-server" containerID="cri-o://f32cbccf3e53d5c1fd59f95ef075707b363f29fa4e7568e834d12633ad3c2718" gracePeriod=2
Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.124389 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-df5lj"
Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.195634 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w6p5\" (UniqueName: \"kubernetes.io/projected/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-kube-api-access-5w6p5\") pod \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\" (UID: \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\") "
Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.195885 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-utilities\") pod \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\" (UID: \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\") "
Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.196041 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-catalog-content\") pod \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\" (UID: \"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6\") "
Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.198143 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-utilities" (OuterVolumeSpecName: "utilities") pod "2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" (UID: "2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.206331 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-kube-api-access-5w6p5" (OuterVolumeSpecName: "kube-api-access-5w6p5") pod "2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" (UID: "2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6"). InnerVolumeSpecName "kube-api-access-5w6p5". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.298148 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.298188 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.298200 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w6p5\" (UniqueName: \"kubernetes.io/projected/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6-kube-api-access-5w6p5\") on node \"crc\" DevicePath \"\"" Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.633119 4482 generic.go:334] "Generic (PLEG): container finished" podID="2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" containerID="f32cbccf3e53d5c1fd59f95ef075707b363f29fa4e7568e834d12633ad3c2718" exitCode=0 Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.633156 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df5lj" event={"ID":"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6","Type":"ContainerDied","Data":"f32cbccf3e53d5c1fd59f95ef075707b363f29fa4e7568e834d12633ad3c2718"} Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.633207 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-df5lj" Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.633223 4482 scope.go:117] "RemoveContainer" containerID="f32cbccf3e53d5c1fd59f95ef075707b363f29fa4e7568e834d12633ad3c2718" Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.633212 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-df5lj" event={"ID":"2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6","Type":"ContainerDied","Data":"b6a40ff613409486d0c93d1eab8ffc997ba6ae66cea054feb80ed0d75230cc08"} Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.659602 4482 scope.go:117] "RemoveContainer" containerID="8d0d8de706ace8b577e155cd56782f6ed9dc37db69c843f1b5d36365c9c0044d" Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.661323 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-df5lj"] Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.672151 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-df5lj"] Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.687836 4482 scope.go:117] "RemoveContainer" containerID="56b89a2e475a97968960990ecce3b2dadde08ceff6dfbacff2a06d60c243af30" Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.717861 4482 scope.go:117] "RemoveContainer" containerID="f32cbccf3e53d5c1fd59f95ef075707b363f29fa4e7568e834d12633ad3c2718" Nov 25 08:43:23 crc kubenswrapper[4482]: E1125 08:43:23.718391 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f32cbccf3e53d5c1fd59f95ef075707b363f29fa4e7568e834d12633ad3c2718\": container with ID starting with f32cbccf3e53d5c1fd59f95ef075707b363f29fa4e7568e834d12633ad3c2718 not found: ID does not exist" containerID="f32cbccf3e53d5c1fd59f95ef075707b363f29fa4e7568e834d12633ad3c2718" Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.718431 
Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.718431 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32cbccf3e53d5c1fd59f95ef075707b363f29fa4e7568e834d12633ad3c2718"} err="failed to get container status \"f32cbccf3e53d5c1fd59f95ef075707b363f29fa4e7568e834d12633ad3c2718\": rpc error: code = NotFound desc = could not find container \"f32cbccf3e53d5c1fd59f95ef075707b363f29fa4e7568e834d12633ad3c2718\": container with ID starting with f32cbccf3e53d5c1fd59f95ef075707b363f29fa4e7568e834d12633ad3c2718 not found: ID does not exist"
Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.718452 4482 scope.go:117] "RemoveContainer" containerID="8d0d8de706ace8b577e155cd56782f6ed9dc37db69c843f1b5d36365c9c0044d"
Nov 25 08:43:23 crc kubenswrapper[4482]: E1125 08:43:23.720399 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0d8de706ace8b577e155cd56782f6ed9dc37db69c843f1b5d36365c9c0044d\": container with ID starting with 8d0d8de706ace8b577e155cd56782f6ed9dc37db69c843f1b5d36365c9c0044d not found: ID does not exist" containerID="8d0d8de706ace8b577e155cd56782f6ed9dc37db69c843f1b5d36365c9c0044d"
Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.720424 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0d8de706ace8b577e155cd56782f6ed9dc37db69c843f1b5d36365c9c0044d"} err="failed to get container status \"8d0d8de706ace8b577e155cd56782f6ed9dc37db69c843f1b5d36365c9c0044d\": rpc error: code = NotFound desc = could not find container \"8d0d8de706ace8b577e155cd56782f6ed9dc37db69c843f1b5d36365c9c0044d\": container with ID starting with 8d0d8de706ace8b577e155cd56782f6ed9dc37db69c843f1b5d36365c9c0044d not found: ID does not exist"
Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.720438 4482 scope.go:117] "RemoveContainer" containerID="56b89a2e475a97968960990ecce3b2dadde08ceff6dfbacff2a06d60c243af30"
Nov 25 08:43:23 crc kubenswrapper[4482]: E1125 08:43:23.720716 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56b89a2e475a97968960990ecce3b2dadde08ceff6dfbacff2a06d60c243af30\": container with ID starting with 56b89a2e475a97968960990ecce3b2dadde08ceff6dfbacff2a06d60c243af30 not found: ID does not exist" containerID="56b89a2e475a97968960990ecce3b2dadde08ceff6dfbacff2a06d60c243af30"
Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.720737 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b89a2e475a97968960990ecce3b2dadde08ceff6dfbacff2a06d60c243af30"} err="failed to get container status \"56b89a2e475a97968960990ecce3b2dadde08ceff6dfbacff2a06d60c243af30\": rpc error: code = NotFound desc = could not find container \"56b89a2e475a97968960990ecce3b2dadde08ceff6dfbacff2a06d60c243af30\": container with ID starting with 56b89a2e475a97968960990ecce3b2dadde08ceff6dfbacff2a06d60c243af30 not found: ID does not exist"
Nov 25 08:43:23 crc kubenswrapper[4482]: I1125 08:43:23.838956 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" path="/var/lib/kubelet/pods/2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6/volumes"
Nov 25 08:43:25 crc kubenswrapper[4482]: I1125 08:43:25.590417 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rb2z2"
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rb2z2" Nov 25 08:43:25 crc kubenswrapper[4482]: I1125 08:43:25.663572 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rb2z2" Nov 25 08:43:25 crc kubenswrapper[4482]: I1125 08:43:25.698572 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rb2z2" Nov 25 08:43:27 crc kubenswrapper[4482]: I1125 08:43:27.862506 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb2z2"] Nov 25 08:43:27 crc kubenswrapper[4482]: I1125 08:43:27.862914 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rb2z2" podUID="94b5e893-7dad-4b48-a2d6-6a6df3c65df0" containerName="registry-server" containerID="cri-o://08193a3ace966c2941215711a6439bc0c2c1368a5cbe3162e5a770038054863a" gracePeriod=2 Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.280551 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb2z2" Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.377780 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-catalog-content\") pod \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\" (UID: \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\") " Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.377823 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjz94\" (UniqueName: \"kubernetes.io/projected/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-kube-api-access-vjz94\") pod \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\" (UID: \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\") " Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.378018 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-utilities\") pod \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\" (UID: \"94b5e893-7dad-4b48-a2d6-6a6df3c65df0\") " Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.378592 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-utilities" (OuterVolumeSpecName: "utilities") pod "94b5e893-7dad-4b48-a2d6-6a6df3c65df0" (UID: "94b5e893-7dad-4b48-a2d6-6a6df3c65df0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.382677 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-kube-api-access-vjz94" (OuterVolumeSpecName: "kube-api-access-vjz94") pod "94b5e893-7dad-4b48-a2d6-6a6df3c65df0" (UID: "94b5e893-7dad-4b48-a2d6-6a6df3c65df0"). InnerVolumeSpecName "kube-api-access-vjz94". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.391971 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94b5e893-7dad-4b48-a2d6-6a6df3c65df0" (UID: "94b5e893-7dad-4b48-a2d6-6a6df3c65df0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.479499 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.479517 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjz94\" (UniqueName: \"kubernetes.io/projected/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-kube-api-access-vjz94\") on node \"crc\" DevicePath \"\"" Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.479528 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94b5e893-7dad-4b48-a2d6-6a6df3c65df0-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.669832 4482 generic.go:334] "Generic (PLEG): container finished" podID="94b5e893-7dad-4b48-a2d6-6a6df3c65df0" containerID="08193a3ace966c2941215711a6439bc0c2c1368a5cbe3162e5a770038054863a" exitCode=0 Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.669908 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb2z2" Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.669924 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb2z2" event={"ID":"94b5e893-7dad-4b48-a2d6-6a6df3c65df0","Type":"ContainerDied","Data":"08193a3ace966c2941215711a6439bc0c2c1368a5cbe3162e5a770038054863a"} Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.670242 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb2z2" event={"ID":"94b5e893-7dad-4b48-a2d6-6a6df3c65df0","Type":"ContainerDied","Data":"053a072d99f872fa000eced8a592c155f45753e318e32f3e11e7bf0c50b9b5d6"} Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.670293 4482 scope.go:117] "RemoveContainer" containerID="08193a3ace966c2941215711a6439bc0c2c1368a5cbe3162e5a770038054863a" Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.688334 4482 scope.go:117] "RemoveContainer" containerID="8d0652222525326e7e8949db21bcfd586d3d1bc8e4b8db7f07f021379b5ab419" Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.697270 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb2z2"] Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.701860 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb2z2"] Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.706738 4482 scope.go:117] "RemoveContainer" containerID="7994fd0fa736a970d019bd6030a7bcf56df4054e6cd1f45998d2aee4610c8e3b" Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.740084 4482 scope.go:117] "RemoveContainer" containerID="08193a3ace966c2941215711a6439bc0c2c1368a5cbe3162e5a770038054863a" Nov 25 08:43:28 crc kubenswrapper[4482]: E1125 08:43:28.740454 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08193a3ace966c2941215711a6439bc0c2c1368a5cbe3162e5a770038054863a\": container with ID starting with 08193a3ace966c2941215711a6439bc0c2c1368a5cbe3162e5a770038054863a not found: ID does not exist" containerID="08193a3ace966c2941215711a6439bc0c2c1368a5cbe3162e5a770038054863a" Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.740488 4482 
Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.740488 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08193a3ace966c2941215711a6439bc0c2c1368a5cbe3162e5a770038054863a"} err="failed to get container status \"08193a3ace966c2941215711a6439bc0c2c1368a5cbe3162e5a770038054863a\": rpc error: code = NotFound desc = could not find container \"08193a3ace966c2941215711a6439bc0c2c1368a5cbe3162e5a770038054863a\": container with ID starting with 08193a3ace966c2941215711a6439bc0c2c1368a5cbe3162e5a770038054863a not found: ID does not exist"
Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.740511 4482 scope.go:117] "RemoveContainer" containerID="8d0652222525326e7e8949db21bcfd586d3d1bc8e4b8db7f07f021379b5ab419"
Nov 25 08:43:28 crc kubenswrapper[4482]: E1125 08:43:28.740813 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0652222525326e7e8949db21bcfd586d3d1bc8e4b8db7f07f021379b5ab419\": container with ID starting with 8d0652222525326e7e8949db21bcfd586d3d1bc8e4b8db7f07f021379b5ab419 not found: ID does not exist" containerID="8d0652222525326e7e8949db21bcfd586d3d1bc8e4b8db7f07f021379b5ab419"
Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.740833 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0652222525326e7e8949db21bcfd586d3d1bc8e4b8db7f07f021379b5ab419"} err="failed to get container status \"8d0652222525326e7e8949db21bcfd586d3d1bc8e4b8db7f07f021379b5ab419\": rpc error: code = NotFound desc = could not find container \"8d0652222525326e7e8949db21bcfd586d3d1bc8e4b8db7f07f021379b5ab419\": container with ID starting with 8d0652222525326e7e8949db21bcfd586d3d1bc8e4b8db7f07f021379b5ab419 not found: ID does not exist"
Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.740846 4482 scope.go:117] "RemoveContainer" containerID="7994fd0fa736a970d019bd6030a7bcf56df4054e6cd1f45998d2aee4610c8e3b"
Nov 25 08:43:28 crc kubenswrapper[4482]: E1125 08:43:28.741082 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7994fd0fa736a970d019bd6030a7bcf56df4054e6cd1f45998d2aee4610c8e3b\": container with ID starting with 7994fd0fa736a970d019bd6030a7bcf56df4054e6cd1f45998d2aee4610c8e3b not found: ID does not exist" containerID="7994fd0fa736a970d019bd6030a7bcf56df4054e6cd1f45998d2aee4610c8e3b"
Nov 25 08:43:28 crc kubenswrapper[4482]: I1125 08:43:28.741110 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7994fd0fa736a970d019bd6030a7bcf56df4054e6cd1f45998d2aee4610c8e3b"} err="failed to get container status \"7994fd0fa736a970d019bd6030a7bcf56df4054e6cd1f45998d2aee4610c8e3b\": rpc error: code = NotFound desc = could not find container \"7994fd0fa736a970d019bd6030a7bcf56df4054e6cd1f45998d2aee4610c8e3b\": container with ID starting with 7994fd0fa736a970d019bd6030a7bcf56df4054e6cd1f45998d2aee4610c8e3b not found: ID does not exist"
Nov 25 08:43:29 crc kubenswrapper[4482]: I1125 08:43:29.840841 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94b5e893-7dad-4b48-a2d6-6a6df3c65df0" path="/var/lib/kubelet/pods/94b5e893-7dad-4b48-a2d6-6a6df3c65df0/volumes"
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:43:39 crc kubenswrapper[4482]: I1125 08:43:39.117482 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:43:39 crc kubenswrapper[4482]: I1125 08:43:39.117514 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 08:43:39 crc kubenswrapper[4482]: I1125 08:43:39.117968 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:43:39 crc kubenswrapper[4482]: I1125 08:43:39.118016 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" gracePeriod=600 Nov 25 08:43:39 crc kubenswrapper[4482]: E1125 08:43:39.232552 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:43:39 crc kubenswrapper[4482]: I1125 08:43:39.747295 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" exitCode=0 Nov 25 08:43:39 crc kubenswrapper[4482]: I1125 08:43:39.747334 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514"} Nov 25 08:43:39 crc kubenswrapper[4482]: I1125 08:43:39.747364 4482 scope.go:117] "RemoveContainer" containerID="3e552ba4e30bd2f721d5ecc535e7afb3328dcb62525aa3c8f581aa0679f4f91d" Nov 25 08:43:39 crc kubenswrapper[4482]: I1125 08:43:39.748111 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:43:39 crc kubenswrapper[4482]: E1125 08:43:39.748379 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:43:52 crc kubenswrapper[4482]: I1125 08:43:52.831389 4482 scope.go:117] "RemoveContainer" 
containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:43:52 crc kubenswrapper[4482]: E1125 08:43:52.832159 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:44:04 crc kubenswrapper[4482]: I1125 08:44:04.831489 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:44:04 crc kubenswrapper[4482]: E1125 08:44:04.832098 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:44:18 crc kubenswrapper[4482]: I1125 08:44:18.831336 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:44:18 crc kubenswrapper[4482]: E1125 08:44:18.831960 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:44:31 crc kubenswrapper[4482]: I1125 08:44:31.830530 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:44:31 crc kubenswrapper[4482]: E1125 08:44:31.831063 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:44:43 crc kubenswrapper[4482]: I1125 08:44:43.830785 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:44:43 crc kubenswrapper[4482]: E1125 08:44:43.831913 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:44:55 crc kubenswrapper[4482]: I1125 08:44:55.838780 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:44:55 crc kubenswrapper[4482]: E1125 08:44:55.840432 4482 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.136856 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs"] Nov 25 08:45:00 crc kubenswrapper[4482]: E1125 08:45:00.142315 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" containerName="extract-content" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.142534 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" containerName="extract-content" Nov 25 08:45:00 crc kubenswrapper[4482]: E1125 08:45:00.142623 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b5e893-7dad-4b48-a2d6-6a6df3c65df0" containerName="registry-server" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.142780 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b5e893-7dad-4b48-a2d6-6a6df3c65df0" containerName="registry-server" Nov 25 08:45:00 crc kubenswrapper[4482]: E1125 08:45:00.142872 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b5e893-7dad-4b48-a2d6-6a6df3c65df0" containerName="extract-content" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.142945 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b5e893-7dad-4b48-a2d6-6a6df3c65df0" containerName="extract-content" Nov 25 08:45:00 crc kubenswrapper[4482]: E1125 08:45:00.143105 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b5e893-7dad-4b48-a2d6-6a6df3c65df0" containerName="extract-utilities" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.143189 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b5e893-7dad-4b48-a2d6-6a6df3c65df0" containerName="extract-utilities" Nov 25 08:45:00 crc kubenswrapper[4482]: E1125 08:45:00.143273 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" containerName="extract-utilities" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.143349 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" containerName="extract-utilities" Nov 25 08:45:00 crc kubenswrapper[4482]: E1125 08:45:00.143443 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" containerName="registry-server" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.143599 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" containerName="registry-server" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.144045 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf83f3e-f08e-4d0b-9cdb-1fda380ec2c6" containerName="registry-server" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.144250 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b5e893-7dad-4b48-a2d6-6a6df3c65df0" containerName="registry-server" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.145832 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.153804 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs"] Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.155389 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.159484 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.325677 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk62d\" (UniqueName: \"kubernetes.io/projected/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-kube-api-access-lk62d\") pod \"collect-profiles-29401005-6dwfs\" (UID: \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.326019 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-secret-volume\") pod \"collect-profiles-29401005-6dwfs\" (UID: \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.326070 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-config-volume\") pod \"collect-profiles-29401005-6dwfs\" (UID: \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.427329 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk62d\" (UniqueName: \"kubernetes.io/projected/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-kube-api-access-lk62d\") pod \"collect-profiles-29401005-6dwfs\" (UID: \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.427382 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-secret-volume\") pod \"collect-profiles-29401005-6dwfs\" (UID: \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.427436 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-config-volume\") pod \"collect-profiles-29401005-6dwfs\" (UID: \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.428244 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-config-volume\") pod 
\"collect-profiles-29401005-6dwfs\" (UID: \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.435835 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-secret-volume\") pod \"collect-profiles-29401005-6dwfs\" (UID: \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.442653 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk62d\" (UniqueName: \"kubernetes.io/projected/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-kube-api-access-lk62d\") pod \"collect-profiles-29401005-6dwfs\" (UID: \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.475520 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" Nov 25 08:45:00 crc kubenswrapper[4482]: I1125 08:45:00.879785 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs"] Nov 25 08:45:01 crc kubenswrapper[4482]: I1125 08:45:01.272277 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" event={"ID":"a366ff32-01c6-4bf7-bbb9-ad3374ba644c","Type":"ContainerStarted","Data":"aac1c3023d223c4ae45c64a58995f1f6bac5b3b27910abf2aa7efab7bb7e5cd3"} Nov 25 08:45:01 crc kubenswrapper[4482]: I1125 08:45:01.272529 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" event={"ID":"a366ff32-01c6-4bf7-bbb9-ad3374ba644c","Type":"ContainerStarted","Data":"f4682431671392c661a8e304d0bd2e3f0a056bf52c9100280262a6e0538ee648"} Nov 25 08:45:01 crc kubenswrapper[4482]: I1125 08:45:01.294815 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" podStartSLOduration=1.2947982279999999 podStartE2EDuration="1.294798228s" podCreationTimestamp="2025-11-25 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:45:01.293711419 +0000 UTC m=+7075.781942678" watchObservedRunningTime="2025-11-25 08:45:01.294798228 +0000 UTC m=+7075.783029487" Nov 25 08:45:02 crc kubenswrapper[4482]: I1125 08:45:02.279510 4482 generic.go:334] "Generic (PLEG): container finished" podID="a366ff32-01c6-4bf7-bbb9-ad3374ba644c" containerID="aac1c3023d223c4ae45c64a58995f1f6bac5b3b27910abf2aa7efab7bb7e5cd3" exitCode=0 Nov 25 08:45:02 crc kubenswrapper[4482]: I1125 08:45:02.279590 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" event={"ID":"a366ff32-01c6-4bf7-bbb9-ad3374ba644c","Type":"ContainerDied","Data":"aac1c3023d223c4ae45c64a58995f1f6bac5b3b27910abf2aa7efab7bb7e5cd3"} Nov 25 08:45:03 crc kubenswrapper[4482]: I1125 08:45:03.616415 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" Nov 25 08:45:03 crc kubenswrapper[4482]: I1125 08:45:03.782711 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-config-volume\") pod \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\" (UID: \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\") " Nov 25 08:45:03 crc kubenswrapper[4482]: I1125 08:45:03.782867 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-secret-volume\") pod \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\" (UID: \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\") " Nov 25 08:45:03 crc kubenswrapper[4482]: I1125 08:45:03.782939 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk62d\" (UniqueName: \"kubernetes.io/projected/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-kube-api-access-lk62d\") pod \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\" (UID: \"a366ff32-01c6-4bf7-bbb9-ad3374ba644c\") " Nov 25 08:45:03 crc kubenswrapper[4482]: I1125 08:45:03.783406 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-config-volume" (OuterVolumeSpecName: "config-volume") pod "a366ff32-01c6-4bf7-bbb9-ad3374ba644c" (UID: "a366ff32-01c6-4bf7-bbb9-ad3374ba644c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:45:03 crc kubenswrapper[4482]: I1125 08:45:03.783628 4482 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:45:03 crc kubenswrapper[4482]: I1125 08:45:03.792298 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a366ff32-01c6-4bf7-bbb9-ad3374ba644c" (UID: "a366ff32-01c6-4bf7-bbb9-ad3374ba644c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:45:03 crc kubenswrapper[4482]: I1125 08:45:03.792438 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-kube-api-access-lk62d" (OuterVolumeSpecName: "kube-api-access-lk62d") pod "a366ff32-01c6-4bf7-bbb9-ad3374ba644c" (UID: "a366ff32-01c6-4bf7-bbb9-ad3374ba644c"). InnerVolumeSpecName "kube-api-access-lk62d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:45:03 crc kubenswrapper[4482]: I1125 08:45:03.885845 4482 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:45:03 crc kubenswrapper[4482]: I1125 08:45:03.885872 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk62d\" (UniqueName: \"kubernetes.io/projected/a366ff32-01c6-4bf7-bbb9-ad3374ba644c-kube-api-access-lk62d\") on node \"crc\" DevicePath \"\"" Nov 25 08:45:04 crc kubenswrapper[4482]: I1125 08:45:04.294424 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" event={"ID":"a366ff32-01c6-4bf7-bbb9-ad3374ba644c","Type":"ContainerDied","Data":"f4682431671392c661a8e304d0bd2e3f0a056bf52c9100280262a6e0538ee648"} Nov 25 08:45:04 crc kubenswrapper[4482]: I1125 08:45:04.294463 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4682431671392c661a8e304d0bd2e3f0a056bf52c9100280262a6e0538ee648" Nov 25 08:45:04 crc kubenswrapper[4482]: I1125 08:45:04.294483 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-6dwfs" Nov 25 08:45:04 crc kubenswrapper[4482]: I1125 08:45:04.693774 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7"] Nov 25 08:45:04 crc kubenswrapper[4482]: I1125 08:45:04.700148 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-g7xr7"] Nov 25 08:45:05 crc kubenswrapper[4482]: I1125 08:45:05.848399 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f7109a3-bb73-4533-bb5b-c7e52179326d" path="/var/lib/kubelet/pods/3f7109a3-bb73-4533-bb5b-c7e52179326d/volumes" Nov 25 08:45:10 crc kubenswrapper[4482]: I1125 08:45:10.831493 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:45:10 crc kubenswrapper[4482]: E1125 08:45:10.832294 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:45:24 crc kubenswrapper[4482]: I1125 08:45:24.830639 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:45:24 crc kubenswrapper[4482]: E1125 08:45:24.831349 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:45:37 crc kubenswrapper[4482]: I1125 08:45:37.831618 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 
08:45:37 crc kubenswrapper[4482]: E1125 08:45:37.832193 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:45:42 crc kubenswrapper[4482]: I1125 08:45:42.025252 4482 scope.go:117] "RemoveContainer" containerID="3c8248dae98b89986c8c8c30988aba4f455b7c60d263af6bb75e71389bfdda25" Nov 25 08:45:52 crc kubenswrapper[4482]: I1125 08:45:52.830891 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:45:52 crc kubenswrapper[4482]: E1125 08:45:52.831454 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:46:04 crc kubenswrapper[4482]: I1125 08:46:04.830892 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:46:04 crc kubenswrapper[4482]: E1125 08:46:04.831400 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.139330 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5h6qp"] Nov 25 08:46:09 crc kubenswrapper[4482]: E1125 08:46:09.140103 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a366ff32-01c6-4bf7-bbb9-ad3374ba644c" containerName="collect-profiles" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.140117 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="a366ff32-01c6-4bf7-bbb9-ad3374ba644c" containerName="collect-profiles" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.140610 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="a366ff32-01c6-4bf7-bbb9-ad3374ba644c" containerName="collect-profiles" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.146881 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.206034 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5h6qp"] Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.225013 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-catalog-content\") pod \"redhat-operators-5h6qp\" (UID: \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\") " pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.225052 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-utilities\") pod \"redhat-operators-5h6qp\" (UID: \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\") " pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.225125 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwc5j\" (UniqueName: \"kubernetes.io/projected/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-kube-api-access-cwc5j\") pod \"redhat-operators-5h6qp\" (UID: \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\") " pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.326985 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-catalog-content\") pod \"redhat-operators-5h6qp\" (UID: \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\") " pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.327024 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-utilities\") pod \"redhat-operators-5h6qp\" (UID: \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\") " pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.327072 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwc5j\" (UniqueName: \"kubernetes.io/projected/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-kube-api-access-cwc5j\") pod \"redhat-operators-5h6qp\" (UID: \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\") " pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.327502 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-catalog-content\") pod \"redhat-operators-5h6qp\" (UID: \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\") " pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.327738 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-utilities\") pod \"redhat-operators-5h6qp\" (UID: \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\") " pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.367238 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cwc5j\" (UniqueName: \"kubernetes.io/projected/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-kube-api-access-cwc5j\") pod \"redhat-operators-5h6qp\" (UID: \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\") " pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.467365 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:09 crc kubenswrapper[4482]: I1125 08:46:09.890379 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5h6qp"] Nov 25 08:46:10 crc kubenswrapper[4482]: I1125 08:46:10.722517 4482 generic.go:334] "Generic (PLEG): container finished" podID="d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" containerID="29ba4e66dc72ae735b7987642e3eae4274e711295d59f0ad34f8d6d6d5e29a4e" exitCode=0 Nov 25 08:46:10 crc kubenswrapper[4482]: I1125 08:46:10.722668 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h6qp" event={"ID":"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38","Type":"ContainerDied","Data":"29ba4e66dc72ae735b7987642e3eae4274e711295d59f0ad34f8d6d6d5e29a4e"} Nov 25 08:46:10 crc kubenswrapper[4482]: I1125 08:46:10.723095 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h6qp" event={"ID":"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38","Type":"ContainerStarted","Data":"7e7a0d4023f743da4ffe0ea7bacb852313e18460762c2bed9bb7b75ccafb2a8c"} Nov 25 08:46:11 crc kubenswrapper[4482]: I1125 08:46:11.734243 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h6qp" event={"ID":"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38","Type":"ContainerStarted","Data":"115981953bcee00c56f1e16e02288d9b93af99bbec48f15bd67acf77ee2749b0"} Nov 25 08:46:14 crc kubenswrapper[4482]: I1125 08:46:14.762870 4482 generic.go:334] "Generic (PLEG): container finished" podID="d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" containerID="115981953bcee00c56f1e16e02288d9b93af99bbec48f15bd67acf77ee2749b0" exitCode=0 Nov 25 08:46:14 crc kubenswrapper[4482]: I1125 08:46:14.762974 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h6qp" event={"ID":"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38","Type":"ContainerDied","Data":"115981953bcee00c56f1e16e02288d9b93af99bbec48f15bd67acf77ee2749b0"} Nov 25 08:46:15 crc kubenswrapper[4482]: I1125 08:46:15.786952 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h6qp" event={"ID":"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38","Type":"ContainerStarted","Data":"0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43"} Nov 25 08:46:15 crc kubenswrapper[4482]: I1125 08:46:15.813234 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5h6qp" podStartSLOduration=2.27961467 podStartE2EDuration="6.813211903s" podCreationTimestamp="2025-11-25 08:46:09 +0000 UTC" firstStartedPulling="2025-11-25 08:46:10.724947866 +0000 UTC m=+7145.213179126" lastFinishedPulling="2025-11-25 08:46:15.2585451 +0000 UTC m=+7149.746776359" observedRunningTime="2025-11-25 08:46:15.802911081 +0000 UTC m=+7150.291142340" watchObservedRunningTime="2025-11-25 08:46:15.813211903 +0000 UTC m=+7150.301443163" Nov 25 08:46:18 crc kubenswrapper[4482]: I1125 08:46:18.831226 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:46:18 
crc kubenswrapper[4482]: E1125 08:46:18.831811 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:46:19 crc kubenswrapper[4482]: I1125 08:46:19.467894 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:19 crc kubenswrapper[4482]: I1125 08:46:19.467967 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:20 crc kubenswrapper[4482]: I1125 08:46:20.513975 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5h6qp" podUID="d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" containerName="registry-server" probeResult="failure" output=< Nov 25 08:46:20 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 08:46:20 crc kubenswrapper[4482]: > Nov 25 08:46:29 crc kubenswrapper[4482]: I1125 08:46:29.506735 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:29 crc kubenswrapper[4482]: I1125 08:46:29.551336 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:29 crc kubenswrapper[4482]: I1125 08:46:29.748636 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5h6qp"] Nov 25 08:46:30 crc kubenswrapper[4482]: I1125 08:46:30.906623 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5h6qp" podUID="d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" containerName="registry-server" containerID="cri-o://0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43" gracePeriod=2 Nov 25 08:46:31 crc kubenswrapper[4482]: E1125 08:46:31.094561 4482 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4819e5d_b90f_4c4f_a8e9_8a1ebde67a38.slice/crio-0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43.scope\": RecentStats: unable to find data in memory cache]" Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.369546 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.476051 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwc5j\" (UniqueName: \"kubernetes.io/projected/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-kube-api-access-cwc5j\") pod \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\" (UID: \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\") " Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.476471 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-catalog-content\") pod \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\" (UID: \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\") " Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.479210 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-utilities\") pod \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\" (UID: \"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38\") " Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.479843 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-utilities" (OuterVolumeSpecName: "utilities") pod "d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" (UID: "d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.480908 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.499793 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-kube-api-access-cwc5j" (OuterVolumeSpecName: "kube-api-access-cwc5j") pod "d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" (UID: "d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38"). InnerVolumeSpecName "kube-api-access-cwc5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.569802 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" (UID: "d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.584950 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwc5j\" (UniqueName: \"kubernetes.io/projected/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-kube-api-access-cwc5j\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.584993 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.924748 4482 generic.go:334] "Generic (PLEG): container finished" podID="d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" containerID="0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43" exitCode=0 Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.924828 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5h6qp" Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.924821 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h6qp" event={"ID":"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38","Type":"ContainerDied","Data":"0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43"} Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.926144 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5h6qp" event={"ID":"d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38","Type":"ContainerDied","Data":"7e7a0d4023f743da4ffe0ea7bacb852313e18460762c2bed9bb7b75ccafb2a8c"} Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.926202 4482 scope.go:117] "RemoveContainer" containerID="0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43" Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.949206 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5h6qp"] Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.964380 4482 scope.go:117] "RemoveContainer" containerID="115981953bcee00c56f1e16e02288d9b93af99bbec48f15bd67acf77ee2749b0" Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.969976 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5h6qp"] Nov 25 08:46:31 crc kubenswrapper[4482]: I1125 08:46:31.992052 4482 scope.go:117] "RemoveContainer" containerID="29ba4e66dc72ae735b7987642e3eae4274e711295d59f0ad34f8d6d6d5e29a4e" Nov 25 08:46:32 crc kubenswrapper[4482]: I1125 08:46:32.023811 4482 scope.go:117] "RemoveContainer" containerID="0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43" Nov 25 08:46:32 crc kubenswrapper[4482]: E1125 08:46:32.024384 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43\": container with ID starting with 0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43 not found: ID does not exist" containerID="0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43" Nov 25 08:46:32 crc kubenswrapper[4482]: I1125 08:46:32.024435 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43"} err="failed to get container status \"0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43\": 
rpc error: code = NotFound desc = could not find container \"0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43\": container with ID starting with 0437e9699beea89a401f61152e9d915cd2996eadca1cde2a038fcf28b9f78c43 not found: ID does not exist" Nov 25 08:46:32 crc kubenswrapper[4482]: I1125 08:46:32.024470 4482 scope.go:117] "RemoveContainer" containerID="115981953bcee00c56f1e16e02288d9b93af99bbec48f15bd67acf77ee2749b0" Nov 25 08:46:32 crc kubenswrapper[4482]: E1125 08:46:32.024809 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"115981953bcee00c56f1e16e02288d9b93af99bbec48f15bd67acf77ee2749b0\": container with ID starting with 115981953bcee00c56f1e16e02288d9b93af99bbec48f15bd67acf77ee2749b0 not found: ID does not exist" containerID="115981953bcee00c56f1e16e02288d9b93af99bbec48f15bd67acf77ee2749b0" Nov 25 08:46:32 crc kubenswrapper[4482]: I1125 08:46:32.024935 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"115981953bcee00c56f1e16e02288d9b93af99bbec48f15bd67acf77ee2749b0"} err="failed to get container status \"115981953bcee00c56f1e16e02288d9b93af99bbec48f15bd67acf77ee2749b0\": rpc error: code = NotFound desc = could not find container \"115981953bcee00c56f1e16e02288d9b93af99bbec48f15bd67acf77ee2749b0\": container with ID starting with 115981953bcee00c56f1e16e02288d9b93af99bbec48f15bd67acf77ee2749b0 not found: ID does not exist" Nov 25 08:46:32 crc kubenswrapper[4482]: I1125 08:46:32.025033 4482 scope.go:117] "RemoveContainer" containerID="29ba4e66dc72ae735b7987642e3eae4274e711295d59f0ad34f8d6d6d5e29a4e" Nov 25 08:46:32 crc kubenswrapper[4482]: E1125 08:46:32.025442 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29ba4e66dc72ae735b7987642e3eae4274e711295d59f0ad34f8d6d6d5e29a4e\": container with ID starting with 29ba4e66dc72ae735b7987642e3eae4274e711295d59f0ad34f8d6d6d5e29a4e not found: ID does not exist" containerID="29ba4e66dc72ae735b7987642e3eae4274e711295d59f0ad34f8d6d6d5e29a4e" Nov 25 08:46:32 crc kubenswrapper[4482]: I1125 08:46:32.025471 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29ba4e66dc72ae735b7987642e3eae4274e711295d59f0ad34f8d6d6d5e29a4e"} err="failed to get container status \"29ba4e66dc72ae735b7987642e3eae4274e711295d59f0ad34f8d6d6d5e29a4e\": rpc error: code = NotFound desc = could not find container \"29ba4e66dc72ae735b7987642e3eae4274e711295d59f0ad34f8d6d6d5e29a4e\": container with ID starting with 29ba4e66dc72ae735b7987642e3eae4274e711295d59f0ad34f8d6d6d5e29a4e not found: ID does not exist" Nov 25 08:46:33 crc kubenswrapper[4482]: I1125 08:46:33.830965 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:46:33 crc kubenswrapper[4482]: E1125 08:46:33.831295 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:46:33 crc kubenswrapper[4482]: I1125 08:46:33.838365 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" path="/var/lib/kubelet/pods/d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38/volumes" Nov 25 08:46:46 crc kubenswrapper[4482]: I1125 08:46:46.830611 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:46:46 crc kubenswrapper[4482]: E1125 08:46:46.831541 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:46:57 crc kubenswrapper[4482]: I1125 08:46:57.832534 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:46:57 crc kubenswrapper[4482]: E1125 08:46:57.833640 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:47:11 crc kubenswrapper[4482]: I1125 08:47:11.830841 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:47:11 crc kubenswrapper[4482]: E1125 08:47:11.831599 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:47:25 crc kubenswrapper[4482]: I1125 08:47:25.834780 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:47:25 crc kubenswrapper[4482]: E1125 08:47:25.835379 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:47:39 crc kubenswrapper[4482]: I1125 08:47:39.832293 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:47:39 crc kubenswrapper[4482]: E1125 08:47:39.832761 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:47:52 crc kubenswrapper[4482]: I1125 08:47:52.831153 4482 
scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:47:52 crc kubenswrapper[4482]: E1125 08:47:52.831752 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:48:07 crc kubenswrapper[4482]: I1125 08:48:07.831073 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:48:07 crc kubenswrapper[4482]: E1125 08:48:07.831968 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:48:22 crc kubenswrapper[4482]: I1125 08:48:22.831056 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:48:22 crc kubenswrapper[4482]: E1125 08:48:22.831674 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:48:34 crc kubenswrapper[4482]: I1125 08:48:34.831406 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:48:34 crc kubenswrapper[4482]: E1125 08:48:34.831903 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:48:46 crc kubenswrapper[4482]: I1125 08:48:46.831257 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:48:47 crc kubenswrapper[4482]: I1125 08:48:47.921585 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"31b6724dbfc647a108812f1750fa2f3d9b8f090eab33de2ec33a105bfbb27261"} Nov 25 08:49:30 crc kubenswrapper[4482]: I1125 08:49:30.905243 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75f48b77ff-w6nf7"] Nov 25 08:49:30 crc kubenswrapper[4482]: E1125 08:49:30.906070 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" containerName="extract-content" Nov 25 08:49:30 crc kubenswrapper[4482]: I1125 
08:49:30.906083 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" containerName="extract-content" Nov 25 08:49:30 crc kubenswrapper[4482]: E1125 08:49:30.906102 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" containerName="extract-utilities" Nov 25 08:49:30 crc kubenswrapper[4482]: I1125 08:49:30.906108 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" containerName="extract-utilities" Nov 25 08:49:30 crc kubenswrapper[4482]: E1125 08:49:30.906114 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" containerName="registry-server" Nov 25 08:49:30 crc kubenswrapper[4482]: I1125 08:49:30.906120 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" containerName="registry-server" Nov 25 08:49:30 crc kubenswrapper[4482]: I1125 08:49:30.906386 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4819e5d-b90f-4c4f-a8e9-8a1ebde67a38" containerName="registry-server" Nov 25 08:49:30 crc kubenswrapper[4482]: I1125 08:49:30.907233 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:30 crc kubenswrapper[4482]: I1125 08:49:30.969985 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75f48b77ff-w6nf7"] Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.000309 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-public-tls-certs\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.000426 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-config\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.000456 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-internal-tls-certs\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.000493 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-ovndb-tls-certs\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.000525 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-combined-ca-bundle\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.000600 4482 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-httpd-config\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.000663 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dghr\" (UniqueName: \"kubernetes.io/projected/f26de54e-5b3f-4759-b4e3-359f89af359f-kube-api-access-6dghr\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.102558 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-httpd-config\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.102868 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dghr\" (UniqueName: \"kubernetes.io/projected/f26de54e-5b3f-4759-b4e3-359f89af359f-kube-api-access-6dghr\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.103102 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-public-tls-certs\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.103322 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-config\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.103451 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-internal-tls-certs\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.103984 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-ovndb-tls-certs\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.104094 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-combined-ca-bundle\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.108891 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-combined-ca-bundle\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.108896 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-public-tls-certs\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.108896 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-ovndb-tls-certs\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.109186 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-config\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.109584 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-internal-tls-certs\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.110662 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f26de54e-5b3f-4759-b4e3-359f89af359f-httpd-config\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.121498 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dghr\" (UniqueName: \"kubernetes.io/projected/f26de54e-5b3f-4759-b4e3-359f89af359f-kube-api-access-6dghr\") pod \"neutron-75f48b77ff-w6nf7\" (UID: \"f26de54e-5b3f-4759-b4e3-359f89af359f\") " pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.222674 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:31 crc kubenswrapper[4482]: I1125 08:49:31.844553 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75f48b77ff-w6nf7"] Nov 25 08:49:32 crc kubenswrapper[4482]: I1125 08:49:32.202739 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75f48b77ff-w6nf7" event={"ID":"f26de54e-5b3f-4759-b4e3-359f89af359f","Type":"ContainerStarted","Data":"d6a71ec0441e5fcb2dbaae8b09ac35dcfef572f154ef9848b248aa2f012ff444"} Nov 25 08:49:32 crc kubenswrapper[4482]: I1125 08:49:32.202806 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75f48b77ff-w6nf7" event={"ID":"f26de54e-5b3f-4759-b4e3-359f89af359f","Type":"ContainerStarted","Data":"c922cb05788d984890c8d64fe5aab5e6664844eafe9a2d680426d013b612c35d"} Nov 25 08:49:32 crc kubenswrapper[4482]: I1125 08:49:32.202817 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75f48b77ff-w6nf7" event={"ID":"f26de54e-5b3f-4759-b4e3-359f89af359f","Type":"ContainerStarted","Data":"c7aa9f9bf95dab3b9dda8fa8261894a32b54bc0eb8636c41d89c092c35bc73d4"} Nov 25 08:49:32 crc kubenswrapper[4482]: I1125 08:49:32.204343 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:49:32 crc kubenswrapper[4482]: I1125 08:49:32.226566 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75f48b77ff-w6nf7" podStartSLOduration=2.2265539309999998 podStartE2EDuration="2.226553931s" podCreationTimestamp="2025-11-25 08:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:49:32.22113072 +0000 UTC m=+7346.709361979" watchObservedRunningTime="2025-11-25 08:49:32.226553931 +0000 UTC m=+7346.714785190" Nov 25 08:50:01 crc kubenswrapper[4482]: I1125 08:50:01.237600 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75f48b77ff-w6nf7" Nov 25 08:50:01 crc kubenswrapper[4482]: I1125 08:50:01.324386 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b7cdd7c85-7hng5"] Nov 25 08:50:01 crc kubenswrapper[4482]: I1125 08:50:01.324647 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b7cdd7c85-7hng5" podUID="a4ae615c-dd7c-4ffe-968d-369d0b26c25b" containerName="neutron-api" containerID="cri-o://2f8fd4653c37e5bd6ca2f5d9641fd6668159c731193eefe862e2802305759e0c" gracePeriod=30 Nov 25 08:50:01 crc kubenswrapper[4482]: I1125 08:50:01.325137 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b7cdd7c85-7hng5" podUID="a4ae615c-dd7c-4ffe-968d-369d0b26c25b" containerName="neutron-httpd" containerID="cri-o://b324c6af9c636d86140c0c634f9be6bad4477f7497f7bd026efa1e667c704d70" gracePeriod=30 Nov 25 08:50:02 crc kubenswrapper[4482]: I1125 08:50:02.547111 4482 generic.go:334] "Generic (PLEG): container finished" podID="a4ae615c-dd7c-4ffe-968d-369d0b26c25b" containerID="b324c6af9c636d86140c0c634f9be6bad4477f7497f7bd026efa1e667c704d70" exitCode=0 Nov 25 08:50:02 crc kubenswrapper[4482]: I1125 08:50:02.547379 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7cdd7c85-7hng5" event={"ID":"a4ae615c-dd7c-4ffe-968d-369d0b26c25b","Type":"ContainerDied","Data":"b324c6af9c636d86140c0c634f9be6bad4477f7497f7bd026efa1e667c704d70"} Nov 25 08:50:14 crc kubenswrapper[4482]: 
I1125 08:50:14.668623 4482 generic.go:334] "Generic (PLEG): container finished" podID="a4ae615c-dd7c-4ffe-968d-369d0b26c25b" containerID="2f8fd4653c37e5bd6ca2f5d9641fd6668159c731193eefe862e2802305759e0c" exitCode=0 Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.668837 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7cdd7c85-7hng5" event={"ID":"a4ae615c-dd7c-4ffe-968d-369d0b26c25b","Type":"ContainerDied","Data":"2f8fd4653c37e5bd6ca2f5d9641fd6668159c731193eefe862e2802305759e0c"} Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.809974 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.821560 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-public-tls-certs\") pod \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.821870 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpppq\" (UniqueName: \"kubernetes.io/projected/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-kube-api-access-qpppq\") pod \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.822031 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-internal-tls-certs\") pod \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.822104 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-ovndb-tls-certs\") pod \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.822211 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-combined-ca-bundle\") pod \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.822240 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-config\") pod \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.822267 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-httpd-config\") pod \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\" (UID: \"a4ae615c-dd7c-4ffe-968d-369d0b26c25b\") " Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.836065 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a4ae615c-dd7c-4ffe-968d-369d0b26c25b" (UID: "a4ae615c-dd7c-4ffe-968d-369d0b26c25b"). 
InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.877397 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-kube-api-access-qpppq" (OuterVolumeSpecName: "kube-api-access-qpppq") pod "a4ae615c-dd7c-4ffe-968d-369d0b26c25b" (UID: "a4ae615c-dd7c-4ffe-968d-369d0b26c25b"). InnerVolumeSpecName "kube-api-access-qpppq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.912579 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a4ae615c-dd7c-4ffe-968d-369d0b26c25b" (UID: "a4ae615c-dd7c-4ffe-968d-369d0b26c25b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.913546 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a4ae615c-dd7c-4ffe-968d-369d0b26c25b" (UID: "a4ae615c-dd7c-4ffe-968d-369d0b26c25b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.913916 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-config" (OuterVolumeSpecName: "config") pod "a4ae615c-dd7c-4ffe-968d-369d0b26c25b" (UID: "a4ae615c-dd7c-4ffe-968d-369d0b26c25b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.920777 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4ae615c-dd7c-4ffe-968d-369d0b26c25b" (UID: "a4ae615c-dd7c-4ffe-968d-369d0b26c25b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.926574 4482 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.926603 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpppq\" (UniqueName: \"kubernetes.io/projected/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-kube-api-access-qpppq\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.926616 4482 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.926625 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.926634 4482 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.926647 4482 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:14 crc kubenswrapper[4482]: I1125 08:50:14.937314 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a4ae615c-dd7c-4ffe-968d-369d0b26c25b" (UID: "a4ae615c-dd7c-4ffe-968d-369d0b26c25b"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:50:15 crc kubenswrapper[4482]: I1125 08:50:15.029618 4482 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4ae615c-dd7c-4ffe-968d-369d0b26c25b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:50:15 crc kubenswrapper[4482]: I1125 08:50:15.684047 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7cdd7c85-7hng5" event={"ID":"a4ae615c-dd7c-4ffe-968d-369d0b26c25b","Type":"ContainerDied","Data":"45f26c1c640192f8a01076c765be8918b5cc3aac615303140a2a593f67c13bd7"} Nov 25 08:50:15 crc kubenswrapper[4482]: I1125 08:50:15.684142 4482 scope.go:117] "RemoveContainer" containerID="b324c6af9c636d86140c0c634f9be6bad4477f7497f7bd026efa1e667c704d70" Nov 25 08:50:15 crc kubenswrapper[4482]: I1125 08:50:15.684156 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b7cdd7c85-7hng5" Nov 25 08:50:15 crc kubenswrapper[4482]: I1125 08:50:15.721517 4482 scope.go:117] "RemoveContainer" containerID="2f8fd4653c37e5bd6ca2f5d9641fd6668159c731193eefe862e2802305759e0c" Nov 25 08:50:15 crc kubenswrapper[4482]: I1125 08:50:15.732990 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b7cdd7c85-7hng5"] Nov 25 08:50:15 crc kubenswrapper[4482]: I1125 08:50:15.756770 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b7cdd7c85-7hng5"] Nov 25 08:50:15 crc kubenswrapper[4482]: I1125 08:50:15.843995 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4ae615c-dd7c-4ffe-968d-369d0b26c25b" path="/var/lib/kubelet/pods/a4ae615c-dd7c-4ffe-968d-369d0b26c25b/volumes" Nov 25 08:50:15 crc kubenswrapper[4482]: E1125 08:50:15.900493 4482 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4ae615c_dd7c_4ffe_968d_369d0b26c25b.slice/crio-45f26c1c640192f8a01076c765be8918b5cc3aac615303140a2a593f67c13bd7\": RecentStats: unable to find data in memory cache]" Nov 25 08:51:09 crc kubenswrapper[4482]: I1125 08:51:09.117696 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:51:09 crc kubenswrapper[4482]: I1125 08:51:09.118358 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:51:23 crc kubenswrapper[4482]: E1125 08:51:23.822700 4482 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.26.133:45700->192.168.26.133:42749: write tcp 192.168.26.133:45700->192.168.26.133:42749: write: broken pipe Nov 25 08:51:39 crc kubenswrapper[4482]: I1125 08:51:39.117528 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:51:39 crc kubenswrapper[4482]: I1125 08:51:39.118100 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:52:09 crc kubenswrapper[4482]: I1125 08:52:09.117797 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:52:09 crc kubenswrapper[4482]: I1125 08:52:09.119465 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" 
podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:52:09 crc kubenswrapper[4482]: I1125 08:52:09.119591 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 08:52:09 crc kubenswrapper[4482]: I1125 08:52:09.120537 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31b6724dbfc647a108812f1750fa2f3d9b8f090eab33de2ec33a105bfbb27261"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:52:09 crc kubenswrapper[4482]: I1125 08:52:09.120675 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://31b6724dbfc647a108812f1750fa2f3d9b8f090eab33de2ec33a105bfbb27261" gracePeriod=600 Nov 25 08:52:09 crc kubenswrapper[4482]: I1125 08:52:09.753459 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="31b6724dbfc647a108812f1750fa2f3d9b8f090eab33de2ec33a105bfbb27261" exitCode=0 Nov 25 08:52:09 crc kubenswrapper[4482]: I1125 08:52:09.753558 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"31b6724dbfc647a108812f1750fa2f3d9b8f090eab33de2ec33a105bfbb27261"} Nov 25 08:52:09 crc kubenswrapper[4482]: I1125 08:52:09.753797 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40"} Nov 25 08:52:09 crc kubenswrapper[4482]: I1125 08:52:09.753826 4482 scope.go:117] "RemoveContainer" containerID="fe079070fc4b88015287b3490d394faad058da084d7f2595ec148441c72c9514" Nov 25 08:52:56 crc kubenswrapper[4482]: I1125 08:52:56.947152 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bvjzk"] Nov 25 08:52:56 crc kubenswrapper[4482]: E1125 08:52:56.947875 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ae615c-dd7c-4ffe-968d-369d0b26c25b" containerName="neutron-httpd" Nov 25 08:52:56 crc kubenswrapper[4482]: I1125 08:52:56.947886 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ae615c-dd7c-4ffe-968d-369d0b26c25b" containerName="neutron-httpd" Nov 25 08:52:56 crc kubenswrapper[4482]: E1125 08:52:56.947896 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ae615c-dd7c-4ffe-968d-369d0b26c25b" containerName="neutron-api" Nov 25 08:52:56 crc kubenswrapper[4482]: I1125 08:52:56.947902 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ae615c-dd7c-4ffe-968d-369d0b26c25b" containerName="neutron-api" Nov 25 08:52:56 crc kubenswrapper[4482]: I1125 08:52:56.948122 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ae615c-dd7c-4ffe-968d-369d0b26c25b" containerName="neutron-httpd" Nov 25 08:52:56 crc kubenswrapper[4482]: I1125 
08:52:56.948134 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ae615c-dd7c-4ffe-968d-369d0b26c25b" containerName="neutron-api" Nov 25 08:52:56 crc kubenswrapper[4482]: I1125 08:52:56.956612 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:52:56 crc kubenswrapper[4482]: I1125 08:52:56.963941 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bvjzk"] Nov 25 08:52:57 crc kubenswrapper[4482]: I1125 08:52:57.075211 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvjr9\" (UniqueName: \"kubernetes.io/projected/b6e4d8e6-5337-4065-8599-0ab7383404a7-kube-api-access-cvjr9\") pod \"certified-operators-bvjzk\" (UID: \"b6e4d8e6-5337-4065-8599-0ab7383404a7\") " pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:52:57 crc kubenswrapper[4482]: I1125 08:52:57.075376 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6e4d8e6-5337-4065-8599-0ab7383404a7-utilities\") pod \"certified-operators-bvjzk\" (UID: \"b6e4d8e6-5337-4065-8599-0ab7383404a7\") " pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:52:57 crc kubenswrapper[4482]: I1125 08:52:57.076503 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6e4d8e6-5337-4065-8599-0ab7383404a7-catalog-content\") pod \"certified-operators-bvjzk\" (UID: \"b6e4d8e6-5337-4065-8599-0ab7383404a7\") " pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:52:57 crc kubenswrapper[4482]: I1125 08:52:57.178377 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6e4d8e6-5337-4065-8599-0ab7383404a7-catalog-content\") pod \"certified-operators-bvjzk\" (UID: \"b6e4d8e6-5337-4065-8599-0ab7383404a7\") " pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:52:57 crc kubenswrapper[4482]: I1125 08:52:57.178562 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvjr9\" (UniqueName: \"kubernetes.io/projected/b6e4d8e6-5337-4065-8599-0ab7383404a7-kube-api-access-cvjr9\") pod \"certified-operators-bvjzk\" (UID: \"b6e4d8e6-5337-4065-8599-0ab7383404a7\") " pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:52:57 crc kubenswrapper[4482]: I1125 08:52:57.178680 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6e4d8e6-5337-4065-8599-0ab7383404a7-utilities\") pod \"certified-operators-bvjzk\" (UID: \"b6e4d8e6-5337-4065-8599-0ab7383404a7\") " pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:52:57 crc kubenswrapper[4482]: I1125 08:52:57.178894 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6e4d8e6-5337-4065-8599-0ab7383404a7-catalog-content\") pod \"certified-operators-bvjzk\" (UID: \"b6e4d8e6-5337-4065-8599-0ab7383404a7\") " pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:52:57 crc kubenswrapper[4482]: I1125 08:52:57.179073 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b6e4d8e6-5337-4065-8599-0ab7383404a7-utilities\") pod \"certified-operators-bvjzk\" (UID: \"b6e4d8e6-5337-4065-8599-0ab7383404a7\") " pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:52:57 crc kubenswrapper[4482]: I1125 08:52:57.200720 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvjr9\" (UniqueName: \"kubernetes.io/projected/b6e4d8e6-5337-4065-8599-0ab7383404a7-kube-api-access-cvjr9\") pod \"certified-operators-bvjzk\" (UID: \"b6e4d8e6-5337-4065-8599-0ab7383404a7\") " pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:52:57 crc kubenswrapper[4482]: I1125 08:52:57.273229 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:52:57 crc kubenswrapper[4482]: I1125 08:52:57.770677 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bvjzk"] Nov 25 08:52:58 crc kubenswrapper[4482]: I1125 08:52:58.222892 4482 generic.go:334] "Generic (PLEG): container finished" podID="b6e4d8e6-5337-4065-8599-0ab7383404a7" containerID="81187f408ca18534d5128cd9ef30e315174f2f65d3608ae22a97cb4aabecaddc" exitCode=0 Nov 25 08:52:58 crc kubenswrapper[4482]: I1125 08:52:58.222988 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvjzk" event={"ID":"b6e4d8e6-5337-4065-8599-0ab7383404a7","Type":"ContainerDied","Data":"81187f408ca18534d5128cd9ef30e315174f2f65d3608ae22a97cb4aabecaddc"} Nov 25 08:52:58 crc kubenswrapper[4482]: I1125 08:52:58.223096 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvjzk" event={"ID":"b6e4d8e6-5337-4065-8599-0ab7383404a7","Type":"ContainerStarted","Data":"4560b0b07f13548049d1013b4859fea0eb654ea2f1dc060cdebc44564d5f388b"} Nov 25 08:52:58 crc kubenswrapper[4482]: I1125 08:52:58.226000 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:52:59 crc kubenswrapper[4482]: I1125 08:52:59.235009 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvjzk" event={"ID":"b6e4d8e6-5337-4065-8599-0ab7383404a7","Type":"ContainerStarted","Data":"c46a032b7e3875c638aec7e789725e240ac0d9925ff40dc2d3b17dc10933a72b"} Nov 25 08:53:00 crc kubenswrapper[4482]: I1125 08:53:00.244210 4482 generic.go:334] "Generic (PLEG): container finished" podID="b6e4d8e6-5337-4065-8599-0ab7383404a7" containerID="c46a032b7e3875c638aec7e789725e240ac0d9925ff40dc2d3b17dc10933a72b" exitCode=0 Nov 25 08:53:00 crc kubenswrapper[4482]: I1125 08:53:00.244353 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvjzk" event={"ID":"b6e4d8e6-5337-4065-8599-0ab7383404a7","Type":"ContainerDied","Data":"c46a032b7e3875c638aec7e789725e240ac0d9925ff40dc2d3b17dc10933a72b"} Nov 25 08:53:01 crc kubenswrapper[4482]: I1125 08:53:01.255807 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvjzk" event={"ID":"b6e4d8e6-5337-4065-8599-0ab7383404a7","Type":"ContainerStarted","Data":"27a8ba2c79add787ee8e12904c8af628c7d2772915bd74a7541b05616fe65563"} Nov 25 08:53:01 crc kubenswrapper[4482]: I1125 08:53:01.273258 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bvjzk" podStartSLOduration=2.725593818 podStartE2EDuration="5.273236077s" 
podCreationTimestamp="2025-11-25 08:52:56 +0000 UTC" firstStartedPulling="2025-11-25 08:52:58.225722184 +0000 UTC m=+7552.713953443" lastFinishedPulling="2025-11-25 08:53:00.773364443 +0000 UTC m=+7555.261595702" observedRunningTime="2025-11-25 08:53:01.268957944 +0000 UTC m=+7555.757189204" watchObservedRunningTime="2025-11-25 08:53:01.273236077 +0000 UTC m=+7555.761467336" Nov 25 08:53:07 crc kubenswrapper[4482]: I1125 08:53:07.273506 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:53:07 crc kubenswrapper[4482]: I1125 08:53:07.274273 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:53:07 crc kubenswrapper[4482]: I1125 08:53:07.315737 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:53:07 crc kubenswrapper[4482]: I1125 08:53:07.354610 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:53:07 crc kubenswrapper[4482]: I1125 08:53:07.555088 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bvjzk"] Nov 25 08:53:09 crc kubenswrapper[4482]: I1125 08:53:09.322053 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bvjzk" podUID="b6e4d8e6-5337-4065-8599-0ab7383404a7" containerName="registry-server" containerID="cri-o://27a8ba2c79add787ee8e12904c8af628c7d2772915bd74a7541b05616fe65563" gracePeriod=2 Nov 25 08:53:09 crc kubenswrapper[4482]: I1125 08:53:09.793518 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:53:09 crc kubenswrapper[4482]: I1125 08:53:09.872878 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvjr9\" (UniqueName: \"kubernetes.io/projected/b6e4d8e6-5337-4065-8599-0ab7383404a7-kube-api-access-cvjr9\") pod \"b6e4d8e6-5337-4065-8599-0ab7383404a7\" (UID: \"b6e4d8e6-5337-4065-8599-0ab7383404a7\") " Nov 25 08:53:09 crc kubenswrapper[4482]: I1125 08:53:09.874040 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6e4d8e6-5337-4065-8599-0ab7383404a7-utilities\") pod \"b6e4d8e6-5337-4065-8599-0ab7383404a7\" (UID: \"b6e4d8e6-5337-4065-8599-0ab7383404a7\") " Nov 25 08:53:09 crc kubenswrapper[4482]: I1125 08:53:09.874230 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6e4d8e6-5337-4065-8599-0ab7383404a7-catalog-content\") pod \"b6e4d8e6-5337-4065-8599-0ab7383404a7\" (UID: \"b6e4d8e6-5337-4065-8599-0ab7383404a7\") " Nov 25 08:53:09 crc kubenswrapper[4482]: I1125 08:53:09.875064 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6e4d8e6-5337-4065-8599-0ab7383404a7-utilities" (OuterVolumeSpecName: "utilities") pod "b6e4d8e6-5337-4065-8599-0ab7383404a7" (UID: "b6e4d8e6-5337-4065-8599-0ab7383404a7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:53:09 crc kubenswrapper[4482]: I1125 08:53:09.875193 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6e4d8e6-5337-4065-8599-0ab7383404a7-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:53:09 crc kubenswrapper[4482]: I1125 08:53:09.884449 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6e4d8e6-5337-4065-8599-0ab7383404a7-kube-api-access-cvjr9" (OuterVolumeSpecName: "kube-api-access-cvjr9") pod "b6e4d8e6-5337-4065-8599-0ab7383404a7" (UID: "b6e4d8e6-5337-4065-8599-0ab7383404a7"). InnerVolumeSpecName "kube-api-access-cvjr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:53:09 crc kubenswrapper[4482]: I1125 08:53:09.916311 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6e4d8e6-5337-4065-8599-0ab7383404a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6e4d8e6-5337-4065-8599-0ab7383404a7" (UID: "b6e4d8e6-5337-4065-8599-0ab7383404a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:53:09 crc kubenswrapper[4482]: I1125 08:53:09.977469 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvjr9\" (UniqueName: \"kubernetes.io/projected/b6e4d8e6-5337-4065-8599-0ab7383404a7-kube-api-access-cvjr9\") on node \"crc\" DevicePath \"\"" Nov 25 08:53:09 crc kubenswrapper[4482]: I1125 08:53:09.977505 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6e4d8e6-5337-4065-8599-0ab7383404a7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.332587 4482 generic.go:334] "Generic (PLEG): container finished" podID="b6e4d8e6-5337-4065-8599-0ab7383404a7" containerID="27a8ba2c79add787ee8e12904c8af628c7d2772915bd74a7541b05616fe65563" exitCode=0 Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.332650 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bvjzk" Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.332673 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvjzk" event={"ID":"b6e4d8e6-5337-4065-8599-0ab7383404a7","Type":"ContainerDied","Data":"27a8ba2c79add787ee8e12904c8af628c7d2772915bd74a7541b05616fe65563"} Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.332921 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvjzk" event={"ID":"b6e4d8e6-5337-4065-8599-0ab7383404a7","Type":"ContainerDied","Data":"4560b0b07f13548049d1013b4859fea0eb654ea2f1dc060cdebc44564d5f388b"} Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.332944 4482 scope.go:117] "RemoveContainer" containerID="27a8ba2c79add787ee8e12904c8af628c7d2772915bd74a7541b05616fe65563" Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.352563 4482 scope.go:117] "RemoveContainer" containerID="c46a032b7e3875c638aec7e789725e240ac0d9925ff40dc2d3b17dc10933a72b" Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.360132 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bvjzk"] Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.374210 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bvjzk"] Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.380893 4482 scope.go:117] "RemoveContainer" containerID="81187f408ca18534d5128cd9ef30e315174f2f65d3608ae22a97cb4aabecaddc" Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.407285 4482 scope.go:117] "RemoveContainer" containerID="27a8ba2c79add787ee8e12904c8af628c7d2772915bd74a7541b05616fe65563" Nov 25 08:53:10 crc kubenswrapper[4482]: E1125 08:53:10.407912 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27a8ba2c79add787ee8e12904c8af628c7d2772915bd74a7541b05616fe65563\": container with ID starting with 27a8ba2c79add787ee8e12904c8af628c7d2772915bd74a7541b05616fe65563 not found: ID does not exist" containerID="27a8ba2c79add787ee8e12904c8af628c7d2772915bd74a7541b05616fe65563" Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.407985 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27a8ba2c79add787ee8e12904c8af628c7d2772915bd74a7541b05616fe65563"} err="failed to get container status \"27a8ba2c79add787ee8e12904c8af628c7d2772915bd74a7541b05616fe65563\": rpc error: code = NotFound desc = could not find container \"27a8ba2c79add787ee8e12904c8af628c7d2772915bd74a7541b05616fe65563\": container with ID starting with 27a8ba2c79add787ee8e12904c8af628c7d2772915bd74a7541b05616fe65563 not found: ID does not exist" Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.408012 4482 scope.go:117] "RemoveContainer" containerID="c46a032b7e3875c638aec7e789725e240ac0d9925ff40dc2d3b17dc10933a72b" Nov 25 08:53:10 crc kubenswrapper[4482]: E1125 08:53:10.408469 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c46a032b7e3875c638aec7e789725e240ac0d9925ff40dc2d3b17dc10933a72b\": container with ID starting with c46a032b7e3875c638aec7e789725e240ac0d9925ff40dc2d3b17dc10933a72b not found: ID does not exist" containerID="c46a032b7e3875c638aec7e789725e240ac0d9925ff40dc2d3b17dc10933a72b" Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.408504 4482 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c46a032b7e3875c638aec7e789725e240ac0d9925ff40dc2d3b17dc10933a72b"} err="failed to get container status \"c46a032b7e3875c638aec7e789725e240ac0d9925ff40dc2d3b17dc10933a72b\": rpc error: code = NotFound desc = could not find container \"c46a032b7e3875c638aec7e789725e240ac0d9925ff40dc2d3b17dc10933a72b\": container with ID starting with c46a032b7e3875c638aec7e789725e240ac0d9925ff40dc2d3b17dc10933a72b not found: ID does not exist" Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.408532 4482 scope.go:117] "RemoveContainer" containerID="81187f408ca18534d5128cd9ef30e315174f2f65d3608ae22a97cb4aabecaddc" Nov 25 08:53:10 crc kubenswrapper[4482]: E1125 08:53:10.408790 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81187f408ca18534d5128cd9ef30e315174f2f65d3608ae22a97cb4aabecaddc\": container with ID starting with 81187f408ca18534d5128cd9ef30e315174f2f65d3608ae22a97cb4aabecaddc not found: ID does not exist" containerID="81187f408ca18534d5128cd9ef30e315174f2f65d3608ae22a97cb4aabecaddc" Nov 25 08:53:10 crc kubenswrapper[4482]: I1125 08:53:10.408831 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81187f408ca18534d5128cd9ef30e315174f2f65d3608ae22a97cb4aabecaddc"} err="failed to get container status \"81187f408ca18534d5128cd9ef30e315174f2f65d3608ae22a97cb4aabecaddc\": rpc error: code = NotFound desc = could not find container \"81187f408ca18534d5128cd9ef30e315174f2f65d3608ae22a97cb4aabecaddc\": container with ID starting with 81187f408ca18534d5128cd9ef30e315174f2f65d3608ae22a97cb4aabecaddc not found: ID does not exist" Nov 25 08:53:11 crc kubenswrapper[4482]: I1125 08:53:11.840036 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6e4d8e6-5337-4065-8599-0ab7383404a7" path="/var/lib/kubelet/pods/b6e4d8e6-5337-4065-8599-0ab7383404a7/volumes" Nov 25 08:54:09 crc kubenswrapper[4482]: I1125 08:54:09.117583 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:54:09 crc kubenswrapper[4482]: I1125 08:54:09.117873 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:54:39 crc kubenswrapper[4482]: I1125 08:54:39.117396 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:54:39 crc kubenswrapper[4482]: I1125 08:54:39.117784 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:55:09 crc kubenswrapper[4482]: I1125 
08:55:09.117414 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:55:09 crc kubenswrapper[4482]: I1125 08:55:09.117734 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:55:09 crc kubenswrapper[4482]: I1125 08:55:09.117777 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 08:55:09 crc kubenswrapper[4482]: I1125 08:55:09.118293 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:55:09 crc kubenswrapper[4482]: I1125 08:55:09.118341 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" gracePeriod=600 Nov 25 08:55:09 crc kubenswrapper[4482]: E1125 08:55:09.235244 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:55:09 crc kubenswrapper[4482]: I1125 08:55:09.384070 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" exitCode=0 Nov 25 08:55:09 crc kubenswrapper[4482]: I1125 08:55:09.384130 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40"} Nov 25 08:55:09 crc kubenswrapper[4482]: I1125 08:55:09.384178 4482 scope.go:117] "RemoveContainer" containerID="31b6724dbfc647a108812f1750fa2f3d9b8f090eab33de2ec33a105bfbb27261" Nov 25 08:55:09 crc kubenswrapper[4482]: I1125 08:55:09.384799 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:55:09 crc kubenswrapper[4482]: E1125 08:55:09.385123 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:55:23 crc kubenswrapper[4482]: I1125 08:55:23.831001 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:55:23 crc kubenswrapper[4482]: E1125 08:55:23.831936 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:55:36 crc kubenswrapper[4482]: I1125 08:55:36.830948 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:55:36 crc kubenswrapper[4482]: E1125 08:55:36.831659 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:55:50 crc kubenswrapper[4482]: I1125 08:55:50.831554 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:55:50 crc kubenswrapper[4482]: E1125 08:55:50.832418 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:56:05 crc kubenswrapper[4482]: I1125 08:56:05.837557 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:56:05 crc kubenswrapper[4482]: E1125 08:56:05.839261 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.385588 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x4k9d"] Nov 25 08:56:13 crc kubenswrapper[4482]: E1125 08:56:13.387379 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e4d8e6-5337-4065-8599-0ab7383404a7" containerName="registry-server" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.387457 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e4d8e6-5337-4065-8599-0ab7383404a7" containerName="registry-server" Nov 25 08:56:13 crc kubenswrapper[4482]: E1125 08:56:13.387518 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e4d8e6-5337-4065-8599-0ab7383404a7" 
containerName="extract-utilities" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.387561 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e4d8e6-5337-4065-8599-0ab7383404a7" containerName="extract-utilities" Nov 25 08:56:13 crc kubenswrapper[4482]: E1125 08:56:13.387613 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e4d8e6-5337-4065-8599-0ab7383404a7" containerName="extract-content" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.387656 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e4d8e6-5337-4065-8599-0ab7383404a7" containerName="extract-content" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.387923 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e4d8e6-5337-4065-8599-0ab7383404a7" containerName="registry-server" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.390226 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.394574 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4k9d"] Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.489588 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf8zm\" (UniqueName: \"kubernetes.io/projected/214b1692-0e40-455b-b526-9621df939595-kube-api-access-cf8zm\") pod \"redhat-operators-x4k9d\" (UID: \"214b1692-0e40-455b-b526-9621df939595\") " pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.489815 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214b1692-0e40-455b-b526-9621df939595-catalog-content\") pod \"redhat-operators-x4k9d\" (UID: \"214b1692-0e40-455b-b526-9621df939595\") " pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.490112 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214b1692-0e40-455b-b526-9621df939595-utilities\") pod \"redhat-operators-x4k9d\" (UID: \"214b1692-0e40-455b-b526-9621df939595\") " pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.591266 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214b1692-0e40-455b-b526-9621df939595-catalog-content\") pod \"redhat-operators-x4k9d\" (UID: \"214b1692-0e40-455b-b526-9621df939595\") " pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.591306 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf8zm\" (UniqueName: \"kubernetes.io/projected/214b1692-0e40-455b-b526-9621df939595-kube-api-access-cf8zm\") pod \"redhat-operators-x4k9d\" (UID: \"214b1692-0e40-455b-b526-9621df939595\") " pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.591446 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214b1692-0e40-455b-b526-9621df939595-utilities\") pod \"redhat-operators-x4k9d\" (UID: \"214b1692-0e40-455b-b526-9621df939595\") " 
pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.591806 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214b1692-0e40-455b-b526-9621df939595-utilities\") pod \"redhat-operators-x4k9d\" (UID: \"214b1692-0e40-455b-b526-9621df939595\") " pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.592015 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214b1692-0e40-455b-b526-9621df939595-catalog-content\") pod \"redhat-operators-x4k9d\" (UID: \"214b1692-0e40-455b-b526-9621df939595\") " pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.608140 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf8zm\" (UniqueName: \"kubernetes.io/projected/214b1692-0e40-455b-b526-9621df939595-kube-api-access-cf8zm\") pod \"redhat-operators-x4k9d\" (UID: \"214b1692-0e40-455b-b526-9621df939595\") " pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:13 crc kubenswrapper[4482]: I1125 08:56:13.704212 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:14 crc kubenswrapper[4482]: I1125 08:56:14.317273 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4k9d"] Nov 25 08:56:14 crc kubenswrapper[4482]: I1125 08:56:14.911895 4482 generic.go:334] "Generic (PLEG): container finished" podID="214b1692-0e40-455b-b526-9621df939595" containerID="e691237ad61f0f6c7c40621d49cace0f2a4ab9b1a5f310ae1c4d3e8181a9726b" exitCode=0 Nov 25 08:56:14 crc kubenswrapper[4482]: I1125 08:56:14.911991 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4k9d" event={"ID":"214b1692-0e40-455b-b526-9621df939595","Type":"ContainerDied","Data":"e691237ad61f0f6c7c40621d49cace0f2a4ab9b1a5f310ae1c4d3e8181a9726b"} Nov 25 08:56:14 crc kubenswrapper[4482]: I1125 08:56:14.912249 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4k9d" event={"ID":"214b1692-0e40-455b-b526-9621df939595","Type":"ContainerStarted","Data":"b75eaf20771d7f3ce72a22098c367e24542cb39c10d0acb1f65d31f1fa808c01"} Nov 25 08:56:15 crc kubenswrapper[4482]: I1125 08:56:15.924476 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4k9d" event={"ID":"214b1692-0e40-455b-b526-9621df939595","Type":"ContainerStarted","Data":"7aac783b6177587c63e259c8a97b5431fe8e68c94a14ee4e65adf54018564de8"} Nov 25 08:56:17 crc kubenswrapper[4482]: I1125 08:56:17.947997 4482 generic.go:334] "Generic (PLEG): container finished" podID="214b1692-0e40-455b-b526-9621df939595" containerID="7aac783b6177587c63e259c8a97b5431fe8e68c94a14ee4e65adf54018564de8" exitCode=0 Nov 25 08:56:17 crc kubenswrapper[4482]: I1125 08:56:17.948087 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4k9d" event={"ID":"214b1692-0e40-455b-b526-9621df939595","Type":"ContainerDied","Data":"7aac783b6177587c63e259c8a97b5431fe8e68c94a14ee4e65adf54018564de8"} Nov 25 08:56:18 crc kubenswrapper[4482]: I1125 08:56:18.964462 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4k9d" 
event={"ID":"214b1692-0e40-455b-b526-9621df939595","Type":"ContainerStarted","Data":"9b35e128c4936de3a6abd19ffb61bc4aa6b00a64e48af7ff4e9ea79417d4f78e"} Nov 25 08:56:18 crc kubenswrapper[4482]: I1125 08:56:18.982616 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x4k9d" podStartSLOduration=2.3500267409999998 podStartE2EDuration="5.982594927s" podCreationTimestamp="2025-11-25 08:56:13 +0000 UTC" firstStartedPulling="2025-11-25 08:56:14.91333536 +0000 UTC m=+7749.401566619" lastFinishedPulling="2025-11-25 08:56:18.545903546 +0000 UTC m=+7753.034134805" observedRunningTime="2025-11-25 08:56:18.979997481 +0000 UTC m=+7753.468228741" watchObservedRunningTime="2025-11-25 08:56:18.982594927 +0000 UTC m=+7753.470826186" Nov 25 08:56:19 crc kubenswrapper[4482]: I1125 08:56:19.831557 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:56:19 crc kubenswrapper[4482]: E1125 08:56:19.832274 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:56:23 crc kubenswrapper[4482]: I1125 08:56:23.704611 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:23 crc kubenswrapper[4482]: I1125 08:56:23.705880 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:24 crc kubenswrapper[4482]: I1125 08:56:24.742964 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4k9d" podUID="214b1692-0e40-455b-b526-9621df939595" containerName="registry-server" probeResult="failure" output=< Nov 25 08:56:24 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 08:56:24 crc kubenswrapper[4482]: > Nov 25 08:56:32 crc kubenswrapper[4482]: I1125 08:56:32.831231 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:56:32 crc kubenswrapper[4482]: E1125 08:56:32.831785 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:56:33 crc kubenswrapper[4482]: I1125 08:56:33.746543 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:33 crc kubenswrapper[4482]: I1125 08:56:33.782331 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:33 crc kubenswrapper[4482]: I1125 08:56:33.979739 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4k9d"] Nov 25 08:56:35 crc kubenswrapper[4482]: I1125 08:56:35.108212 4482 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x4k9d" podUID="214b1692-0e40-455b-b526-9621df939595" containerName="registry-server" containerID="cri-o://9b35e128c4936de3a6abd19ffb61bc4aa6b00a64e48af7ff4e9ea79417d4f78e" gracePeriod=2 Nov 25 08:56:35 crc kubenswrapper[4482]: I1125 08:56:35.610791 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:35 crc kubenswrapper[4482]: I1125 08:56:35.777525 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214b1692-0e40-455b-b526-9621df939595-utilities\") pod \"214b1692-0e40-455b-b526-9621df939595\" (UID: \"214b1692-0e40-455b-b526-9621df939595\") " Nov 25 08:56:35 crc kubenswrapper[4482]: I1125 08:56:35.777770 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214b1692-0e40-455b-b526-9621df939595-catalog-content\") pod \"214b1692-0e40-455b-b526-9621df939595\" (UID: \"214b1692-0e40-455b-b526-9621df939595\") " Nov 25 08:56:35 crc kubenswrapper[4482]: I1125 08:56:35.778070 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf8zm\" (UniqueName: \"kubernetes.io/projected/214b1692-0e40-455b-b526-9621df939595-kube-api-access-cf8zm\") pod \"214b1692-0e40-455b-b526-9621df939595\" (UID: \"214b1692-0e40-455b-b526-9621df939595\") " Nov 25 08:56:35 crc kubenswrapper[4482]: I1125 08:56:35.779057 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/214b1692-0e40-455b-b526-9621df939595-utilities" (OuterVolumeSpecName: "utilities") pod "214b1692-0e40-455b-b526-9621df939595" (UID: "214b1692-0e40-455b-b526-9621df939595"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:56:35 crc kubenswrapper[4482]: I1125 08:56:35.786712 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214b1692-0e40-455b-b526-9621df939595-kube-api-access-cf8zm" (OuterVolumeSpecName: "kube-api-access-cf8zm") pod "214b1692-0e40-455b-b526-9621df939595" (UID: "214b1692-0e40-455b-b526-9621df939595"). InnerVolumeSpecName "kube-api-access-cf8zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:56:35 crc kubenswrapper[4482]: I1125 08:56:35.847732 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/214b1692-0e40-455b-b526-9621df939595-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "214b1692-0e40-455b-b526-9621df939595" (UID: "214b1692-0e40-455b-b526-9621df939595"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:56:35 crc kubenswrapper[4482]: I1125 08:56:35.881446 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf8zm\" (UniqueName: \"kubernetes.io/projected/214b1692-0e40-455b-b526-9621df939595-kube-api-access-cf8zm\") on node \"crc\" DevicePath \"\"" Nov 25 08:56:35 crc kubenswrapper[4482]: I1125 08:56:35.881478 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214b1692-0e40-455b-b526-9621df939595-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:56:35 crc kubenswrapper[4482]: I1125 08:56:35.881488 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214b1692-0e40-455b-b526-9621df939595-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.117942 4482 generic.go:334] "Generic (PLEG): container finished" podID="214b1692-0e40-455b-b526-9621df939595" containerID="9b35e128c4936de3a6abd19ffb61bc4aa6b00a64e48af7ff4e9ea79417d4f78e" exitCode=0 Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.117991 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4k9d" event={"ID":"214b1692-0e40-455b-b526-9621df939595","Type":"ContainerDied","Data":"9b35e128c4936de3a6abd19ffb61bc4aa6b00a64e48af7ff4e9ea79417d4f78e"} Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.118027 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4k9d" event={"ID":"214b1692-0e40-455b-b526-9621df939595","Type":"ContainerDied","Data":"b75eaf20771d7f3ce72a22098c367e24542cb39c10d0acb1f65d31f1fa808c01"} Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.118045 4482 scope.go:117] "RemoveContainer" containerID="9b35e128c4936de3a6abd19ffb61bc4aa6b00a64e48af7ff4e9ea79417d4f78e" Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.118858 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4k9d" Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.144420 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4k9d"] Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.144689 4482 scope.go:117] "RemoveContainer" containerID="7aac783b6177587c63e259c8a97b5431fe8e68c94a14ee4e65adf54018564de8" Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.152791 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x4k9d"] Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.164340 4482 scope.go:117] "RemoveContainer" containerID="e691237ad61f0f6c7c40621d49cace0f2a4ab9b1a5f310ae1c4d3e8181a9726b" Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.200678 4482 scope.go:117] "RemoveContainer" containerID="9b35e128c4936de3a6abd19ffb61bc4aa6b00a64e48af7ff4e9ea79417d4f78e" Nov 25 08:56:36 crc kubenswrapper[4482]: E1125 08:56:36.201038 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b35e128c4936de3a6abd19ffb61bc4aa6b00a64e48af7ff4e9ea79417d4f78e\": container with ID starting with 9b35e128c4936de3a6abd19ffb61bc4aa6b00a64e48af7ff4e9ea79417d4f78e not found: ID does not exist" containerID="9b35e128c4936de3a6abd19ffb61bc4aa6b00a64e48af7ff4e9ea79417d4f78e" Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.201069 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b35e128c4936de3a6abd19ffb61bc4aa6b00a64e48af7ff4e9ea79417d4f78e"} err="failed to get container status \"9b35e128c4936de3a6abd19ffb61bc4aa6b00a64e48af7ff4e9ea79417d4f78e\": rpc error: code = NotFound desc = could not find container \"9b35e128c4936de3a6abd19ffb61bc4aa6b00a64e48af7ff4e9ea79417d4f78e\": container with ID starting with 9b35e128c4936de3a6abd19ffb61bc4aa6b00a64e48af7ff4e9ea79417d4f78e not found: ID does not exist" Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.201098 4482 scope.go:117] "RemoveContainer" containerID="7aac783b6177587c63e259c8a97b5431fe8e68c94a14ee4e65adf54018564de8" Nov 25 08:56:36 crc kubenswrapper[4482]: E1125 08:56:36.201413 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aac783b6177587c63e259c8a97b5431fe8e68c94a14ee4e65adf54018564de8\": container with ID starting with 7aac783b6177587c63e259c8a97b5431fe8e68c94a14ee4e65adf54018564de8 not found: ID does not exist" containerID="7aac783b6177587c63e259c8a97b5431fe8e68c94a14ee4e65adf54018564de8" Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.201453 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aac783b6177587c63e259c8a97b5431fe8e68c94a14ee4e65adf54018564de8"} err="failed to get container status \"7aac783b6177587c63e259c8a97b5431fe8e68c94a14ee4e65adf54018564de8\": rpc error: code = NotFound desc = could not find container \"7aac783b6177587c63e259c8a97b5431fe8e68c94a14ee4e65adf54018564de8\": container with ID starting with 7aac783b6177587c63e259c8a97b5431fe8e68c94a14ee4e65adf54018564de8 not found: ID does not exist" Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.201479 4482 scope.go:117] "RemoveContainer" containerID="e691237ad61f0f6c7c40621d49cace0f2a4ab9b1a5f310ae1c4d3e8181a9726b" Nov 25 08:56:36 crc kubenswrapper[4482]: E1125 08:56:36.201726 4482 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"e691237ad61f0f6c7c40621d49cace0f2a4ab9b1a5f310ae1c4d3e8181a9726b\": container with ID starting with e691237ad61f0f6c7c40621d49cace0f2a4ab9b1a5f310ae1c4d3e8181a9726b not found: ID does not exist" containerID="e691237ad61f0f6c7c40621d49cace0f2a4ab9b1a5f310ae1c4d3e8181a9726b" Nov 25 08:56:36 crc kubenswrapper[4482]: I1125 08:56:36.201748 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e691237ad61f0f6c7c40621d49cace0f2a4ab9b1a5f310ae1c4d3e8181a9726b"} err="failed to get container status \"e691237ad61f0f6c7c40621d49cace0f2a4ab9b1a5f310ae1c4d3e8181a9726b\": rpc error: code = NotFound desc = could not find container \"e691237ad61f0f6c7c40621d49cace0f2a4ab9b1a5f310ae1c4d3e8181a9726b\": container with ID starting with e691237ad61f0f6c7c40621d49cace0f2a4ab9b1a5f310ae1c4d3e8181a9726b not found: ID does not exist" Nov 25 08:56:37 crc kubenswrapper[4482]: I1125 08:56:37.839461 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214b1692-0e40-455b-b526-9621df939595" path="/var/lib/kubelet/pods/214b1692-0e40-455b-b526-9621df939595/volumes" Nov 25 08:56:43 crc kubenswrapper[4482]: I1125 08:56:43.831387 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:56:43 crc kubenswrapper[4482]: E1125 08:56:43.831898 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:56:50 crc kubenswrapper[4482]: I1125 08:56:50.836737 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hrq8j"] Nov 25 08:56:50 crc kubenswrapper[4482]: E1125 08:56:50.837428 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214b1692-0e40-455b-b526-9621df939595" containerName="extract-utilities" Nov 25 08:56:50 crc kubenswrapper[4482]: I1125 08:56:50.837440 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="214b1692-0e40-455b-b526-9621df939595" containerName="extract-utilities" Nov 25 08:56:50 crc kubenswrapper[4482]: E1125 08:56:50.837459 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214b1692-0e40-455b-b526-9621df939595" containerName="extract-content" Nov 25 08:56:50 crc kubenswrapper[4482]: I1125 08:56:50.837465 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="214b1692-0e40-455b-b526-9621df939595" containerName="extract-content" Nov 25 08:56:50 crc kubenswrapper[4482]: E1125 08:56:50.837479 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214b1692-0e40-455b-b526-9621df939595" containerName="registry-server" Nov 25 08:56:50 crc kubenswrapper[4482]: I1125 08:56:50.837486 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="214b1692-0e40-455b-b526-9621df939595" containerName="registry-server" Nov 25 08:56:50 crc kubenswrapper[4482]: I1125 08:56:50.837662 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="214b1692-0e40-455b-b526-9621df939595" containerName="registry-server" Nov 25 08:56:50 crc kubenswrapper[4482]: I1125 08:56:50.838868 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:56:50 crc kubenswrapper[4482]: I1125 08:56:50.850106 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hrq8j"] Nov 25 08:56:50 crc kubenswrapper[4482]: I1125 08:56:50.929809 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpxw8\" (UniqueName: \"kubernetes.io/projected/905f3018-14a3-4dc5-90a6-c1b0228e32e7-kube-api-access-zpxw8\") pod \"community-operators-hrq8j\" (UID: \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\") " pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:56:50 crc kubenswrapper[4482]: I1125 08:56:50.929967 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/905f3018-14a3-4dc5-90a6-c1b0228e32e7-utilities\") pod \"community-operators-hrq8j\" (UID: \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\") " pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:56:50 crc kubenswrapper[4482]: I1125 08:56:50.930042 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/905f3018-14a3-4dc5-90a6-c1b0228e32e7-catalog-content\") pod \"community-operators-hrq8j\" (UID: \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\") " pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:56:51 crc kubenswrapper[4482]: I1125 08:56:51.032139 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpxw8\" (UniqueName: \"kubernetes.io/projected/905f3018-14a3-4dc5-90a6-c1b0228e32e7-kube-api-access-zpxw8\") pod \"community-operators-hrq8j\" (UID: \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\") " pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:56:51 crc kubenswrapper[4482]: I1125 08:56:51.032344 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/905f3018-14a3-4dc5-90a6-c1b0228e32e7-utilities\") pod \"community-operators-hrq8j\" (UID: \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\") " pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:56:51 crc kubenswrapper[4482]: I1125 08:56:51.032443 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/905f3018-14a3-4dc5-90a6-c1b0228e32e7-catalog-content\") pod \"community-operators-hrq8j\" (UID: \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\") " pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:56:51 crc kubenswrapper[4482]: I1125 08:56:51.032880 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/905f3018-14a3-4dc5-90a6-c1b0228e32e7-catalog-content\") pod \"community-operators-hrq8j\" (UID: \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\") " pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:56:51 crc kubenswrapper[4482]: I1125 08:56:51.033056 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/905f3018-14a3-4dc5-90a6-c1b0228e32e7-utilities\") pod \"community-operators-hrq8j\" (UID: \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\") " pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:56:51 crc kubenswrapper[4482]: I1125 08:56:51.048861 4482 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zpxw8\" (UniqueName: \"kubernetes.io/projected/905f3018-14a3-4dc5-90a6-c1b0228e32e7-kube-api-access-zpxw8\") pod \"community-operators-hrq8j\" (UID: \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\") " pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:56:51 crc kubenswrapper[4482]: I1125 08:56:51.154942 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:56:51 crc kubenswrapper[4482]: I1125 08:56:51.662691 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hrq8j"] Nov 25 08:56:52 crc kubenswrapper[4482]: I1125 08:56:52.252004 4482 generic.go:334] "Generic (PLEG): container finished" podID="905f3018-14a3-4dc5-90a6-c1b0228e32e7" containerID="f716c5cd87fa148612ce018e655b936d8e7a9ff6deb7827cabe57b613cd612e6" exitCode=0 Nov 25 08:56:52 crc kubenswrapper[4482]: I1125 08:56:52.252050 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrq8j" event={"ID":"905f3018-14a3-4dc5-90a6-c1b0228e32e7","Type":"ContainerDied","Data":"f716c5cd87fa148612ce018e655b936d8e7a9ff6deb7827cabe57b613cd612e6"} Nov 25 08:56:52 crc kubenswrapper[4482]: I1125 08:56:52.252356 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrq8j" event={"ID":"905f3018-14a3-4dc5-90a6-c1b0228e32e7","Type":"ContainerStarted","Data":"ae8b43d32cc4cc3110060982a67c74c17c52206da6a39d3d3b3f9fcbdbebf48b"} Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.237947 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pxkw9"] Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.239979 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.248046 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxkw9"] Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.272406 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njk6z\" (UniqueName: \"kubernetes.io/projected/fc3b1759-9f6e-40ac-9682-cc76322e5168-kube-api-access-njk6z\") pod \"redhat-marketplace-pxkw9\" (UID: \"fc3b1759-9f6e-40ac-9682-cc76322e5168\") " pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.272484 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc3b1759-9f6e-40ac-9682-cc76322e5168-utilities\") pod \"redhat-marketplace-pxkw9\" (UID: \"fc3b1759-9f6e-40ac-9682-cc76322e5168\") " pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.272518 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc3b1759-9f6e-40ac-9682-cc76322e5168-catalog-content\") pod \"redhat-marketplace-pxkw9\" (UID: \"fc3b1759-9f6e-40ac-9682-cc76322e5168\") " pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.274449 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrq8j" event={"ID":"905f3018-14a3-4dc5-90a6-c1b0228e32e7","Type":"ContainerStarted","Data":"97645ac573cc5518e1e8175ec5dd6a02c568c67c33975c399e0e30204ff7fcbd"} Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.374064 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc3b1759-9f6e-40ac-9682-cc76322e5168-utilities\") pod \"redhat-marketplace-pxkw9\" (UID: \"fc3b1759-9f6e-40ac-9682-cc76322e5168\") " pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.374600 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc3b1759-9f6e-40ac-9682-cc76322e5168-catalog-content\") pod \"redhat-marketplace-pxkw9\" (UID: \"fc3b1759-9f6e-40ac-9682-cc76322e5168\") " pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.374163 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc3b1759-9f6e-40ac-9682-cc76322e5168-catalog-content\") pod \"redhat-marketplace-pxkw9\" (UID: \"fc3b1759-9f6e-40ac-9682-cc76322e5168\") " pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.374740 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc3b1759-9f6e-40ac-9682-cc76322e5168-utilities\") pod \"redhat-marketplace-pxkw9\" (UID: \"fc3b1759-9f6e-40ac-9682-cc76322e5168\") " pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.374759 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njk6z\" (UniqueName: 
\"kubernetes.io/projected/fc3b1759-9f6e-40ac-9682-cc76322e5168-kube-api-access-njk6z\") pod \"redhat-marketplace-pxkw9\" (UID: \"fc3b1759-9f6e-40ac-9682-cc76322e5168\") " pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.401051 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njk6z\" (UniqueName: \"kubernetes.io/projected/fc3b1759-9f6e-40ac-9682-cc76322e5168-kube-api-access-njk6z\") pod \"redhat-marketplace-pxkw9\" (UID: \"fc3b1759-9f6e-40ac-9682-cc76322e5168\") " pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:56:53 crc kubenswrapper[4482]: I1125 08:56:53.554387 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:56:54 crc kubenswrapper[4482]: I1125 08:56:54.022711 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxkw9"] Nov 25 08:56:54 crc kubenswrapper[4482]: I1125 08:56:54.284549 4482 generic.go:334] "Generic (PLEG): container finished" podID="fc3b1759-9f6e-40ac-9682-cc76322e5168" containerID="302b76c11fb139791f50551c41a1e31be5c12a5d7b1eb02b1a35b370c198b6a0" exitCode=0 Nov 25 08:56:54 crc kubenswrapper[4482]: I1125 08:56:54.284651 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxkw9" event={"ID":"fc3b1759-9f6e-40ac-9682-cc76322e5168","Type":"ContainerDied","Data":"302b76c11fb139791f50551c41a1e31be5c12a5d7b1eb02b1a35b370c198b6a0"} Nov 25 08:56:54 crc kubenswrapper[4482]: I1125 08:56:54.284874 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxkw9" event={"ID":"fc3b1759-9f6e-40ac-9682-cc76322e5168","Type":"ContainerStarted","Data":"12248fff95d1ac2f1602e9eb82018386aec39298c89d24659e5f3f63faa84ade"} Nov 25 08:56:54 crc kubenswrapper[4482]: I1125 08:56:54.287053 4482 generic.go:334] "Generic (PLEG): container finished" podID="905f3018-14a3-4dc5-90a6-c1b0228e32e7" containerID="97645ac573cc5518e1e8175ec5dd6a02c568c67c33975c399e0e30204ff7fcbd" exitCode=0 Nov 25 08:56:54 crc kubenswrapper[4482]: I1125 08:56:54.287119 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrq8j" event={"ID":"905f3018-14a3-4dc5-90a6-c1b0228e32e7","Type":"ContainerDied","Data":"97645ac573cc5518e1e8175ec5dd6a02c568c67c33975c399e0e30204ff7fcbd"} Nov 25 08:56:55 crc kubenswrapper[4482]: I1125 08:56:55.300149 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxkw9" event={"ID":"fc3b1759-9f6e-40ac-9682-cc76322e5168","Type":"ContainerStarted","Data":"e09ecb719a4e5e8ec597d0263f40f046879f19ea68a6b93e27804351b3912cf1"} Nov 25 08:56:55 crc kubenswrapper[4482]: I1125 08:56:55.303991 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrq8j" event={"ID":"905f3018-14a3-4dc5-90a6-c1b0228e32e7","Type":"ContainerStarted","Data":"c7311b5d5230dbc934e23cfb313d32df7db1b4db109baea4cb521fca79c20384"} Nov 25 08:56:55 crc kubenswrapper[4482]: I1125 08:56:55.332509 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hrq8j" podStartSLOduration=2.843490762 podStartE2EDuration="5.332483367s" podCreationTimestamp="2025-11-25 08:56:50 +0000 UTC" firstStartedPulling="2025-11-25 08:56:52.253340114 +0000 UTC m=+7786.741571363" lastFinishedPulling="2025-11-25 08:56:54.742332709 
+0000 UTC m=+7789.230563968" observedRunningTime="2025-11-25 08:56:55.330890255 +0000 UTC m=+7789.819121514" watchObservedRunningTime="2025-11-25 08:56:55.332483367 +0000 UTC m=+7789.820714626" Nov 25 08:56:55 crc kubenswrapper[4482]: I1125 08:56:55.838765 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:56:55 crc kubenswrapper[4482]: E1125 08:56:55.839318 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:56:56 crc kubenswrapper[4482]: I1125 08:56:56.319981 4482 generic.go:334] "Generic (PLEG): container finished" podID="fc3b1759-9f6e-40ac-9682-cc76322e5168" containerID="e09ecb719a4e5e8ec597d0263f40f046879f19ea68a6b93e27804351b3912cf1" exitCode=0 Nov 25 08:56:56 crc kubenswrapper[4482]: I1125 08:56:56.320106 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxkw9" event={"ID":"fc3b1759-9f6e-40ac-9682-cc76322e5168","Type":"ContainerDied","Data":"e09ecb719a4e5e8ec597d0263f40f046879f19ea68a6b93e27804351b3912cf1"} Nov 25 08:56:57 crc kubenswrapper[4482]: I1125 08:56:57.332187 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxkw9" event={"ID":"fc3b1759-9f6e-40ac-9682-cc76322e5168","Type":"ContainerStarted","Data":"5e82f96285d41dce1fc5c57e2d5ab4f666e814b9fc0c0fbd3d732a024885b8c6"} Nov 25 08:56:57 crc kubenswrapper[4482]: I1125 08:56:57.351627 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pxkw9" podStartSLOduration=1.841326477 podStartE2EDuration="4.351609557s" podCreationTimestamp="2025-11-25 08:56:53 +0000 UTC" firstStartedPulling="2025-11-25 08:56:54.287595518 +0000 UTC m=+7788.775826776" lastFinishedPulling="2025-11-25 08:56:56.797878597 +0000 UTC m=+7791.286109856" observedRunningTime="2025-11-25 08:56:57.350641553 +0000 UTC m=+7791.838872812" watchObservedRunningTime="2025-11-25 08:56:57.351609557 +0000 UTC m=+7791.839840806" Nov 25 08:57:01 crc kubenswrapper[4482]: I1125 08:57:01.155320 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:57:01 crc kubenswrapper[4482]: I1125 08:57:01.156942 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:57:01 crc kubenswrapper[4482]: I1125 08:57:01.192277 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:57:01 crc kubenswrapper[4482]: I1125 08:57:01.401927 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:57:01 crc kubenswrapper[4482]: I1125 08:57:01.636028 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hrq8j"] Nov 25 08:57:03 crc kubenswrapper[4482]: I1125 08:57:03.388078 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hrq8j" 
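
The pod_startup_latency_tracker entries above decompose startup latency: podStartE2EDuration is observed-running-time minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), since the startup SLI excludes pull time. Re-deriving the redhat-marketplace-pxkw9 numbers:

    package main

    import (
        "fmt"
        "time"
    )

    // Worked check of the redhat-marketplace-pxkw9 startup figures above:
    // E2E = observed running - creation; SLO = E2E - image-pull window.
    func main() {
        created := time.Date(2025, 11, 25, 8, 56, 53, 0, time.UTC)
        observed := time.Date(2025, 11, 25, 8, 56, 57, 351609557, time.UTC)
        firstPull := time.Date(2025, 11, 25, 8, 56, 54, 287595518, time.UTC)
        lastPull := time.Date(2025, 11, 25, 8, 56, 56, 797878597, time.UTC)

        e2e := observed.Sub(created)         // 4.351609557s = podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 1.841326478s, matching the logged
        fmt.Println(e2e, slo)                // 1.841326477s up to float rounding
    }
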
podUID="905f3018-14a3-4dc5-90a6-c1b0228e32e7" containerName="registry-server" containerID="cri-o://c7311b5d5230dbc934e23cfb313d32df7db1b4db109baea4cb521fca79c20384" gracePeriod=2 Nov 25 08:57:03 crc kubenswrapper[4482]: I1125 08:57:03.555181 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:57:03 crc kubenswrapper[4482]: I1125 08:57:03.555611 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:57:03 crc kubenswrapper[4482]: I1125 08:57:03.600646 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:57:03 crc kubenswrapper[4482]: I1125 08:57:03.846236 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.018892 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/905f3018-14a3-4dc5-90a6-c1b0228e32e7-utilities\") pod \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\" (UID: \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\") " Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.019018 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/905f3018-14a3-4dc5-90a6-c1b0228e32e7-catalog-content\") pod \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\" (UID: \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\") " Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.019410 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpxw8\" (UniqueName: \"kubernetes.io/projected/905f3018-14a3-4dc5-90a6-c1b0228e32e7-kube-api-access-zpxw8\") pod \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\" (UID: \"905f3018-14a3-4dc5-90a6-c1b0228e32e7\") " Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.020610 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/905f3018-14a3-4dc5-90a6-c1b0228e32e7-utilities" (OuterVolumeSpecName: "utilities") pod "905f3018-14a3-4dc5-90a6-c1b0228e32e7" (UID: "905f3018-14a3-4dc5-90a6-c1b0228e32e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.026552 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905f3018-14a3-4dc5-90a6-c1b0228e32e7-kube-api-access-zpxw8" (OuterVolumeSpecName: "kube-api-access-zpxw8") pod "905f3018-14a3-4dc5-90a6-c1b0228e32e7" (UID: "905f3018-14a3-4dc5-90a6-c1b0228e32e7"). InnerVolumeSpecName "kube-api-access-zpxw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.056027 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/905f3018-14a3-4dc5-90a6-c1b0228e32e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "905f3018-14a3-4dc5-90a6-c1b0228e32e7" (UID: "905f3018-14a3-4dc5-90a6-c1b0228e32e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.122731 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/905f3018-14a3-4dc5-90a6-c1b0228e32e7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.122765 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpxw8\" (UniqueName: \"kubernetes.io/projected/905f3018-14a3-4dc5-90a6-c1b0228e32e7-kube-api-access-zpxw8\") on node \"crc\" DevicePath \"\"" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.122777 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/905f3018-14a3-4dc5-90a6-c1b0228e32e7-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.398814 4482 generic.go:334] "Generic (PLEG): container finished" podID="905f3018-14a3-4dc5-90a6-c1b0228e32e7" containerID="c7311b5d5230dbc934e23cfb313d32df7db1b4db109baea4cb521fca79c20384" exitCode=0 Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.398872 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrq8j" event={"ID":"905f3018-14a3-4dc5-90a6-c1b0228e32e7","Type":"ContainerDied","Data":"c7311b5d5230dbc934e23cfb313d32df7db1b4db109baea4cb521fca79c20384"} Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.398929 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrq8j" event={"ID":"905f3018-14a3-4dc5-90a6-c1b0228e32e7","Type":"ContainerDied","Data":"ae8b43d32cc4cc3110060982a67c74c17c52206da6a39d3d3b3f9fcbdbebf48b"} Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.398952 4482 scope.go:117] "RemoveContainer" containerID="c7311b5d5230dbc934e23cfb313d32df7db1b4db109baea4cb521fca79c20384" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.398996 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hrq8j" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.421103 4482 scope.go:117] "RemoveContainer" containerID="97645ac573cc5518e1e8175ec5dd6a02c568c67c33975c399e0e30204ff7fcbd" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.441502 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hrq8j"] Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.446406 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.453414 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hrq8j"] Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.462372 4482 scope.go:117] "RemoveContainer" containerID="f716c5cd87fa148612ce018e655b936d8e7a9ff6deb7827cabe57b613cd612e6" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.495967 4482 scope.go:117] "RemoveContainer" containerID="c7311b5d5230dbc934e23cfb313d32df7db1b4db109baea4cb521fca79c20384" Nov 25 08:57:04 crc kubenswrapper[4482]: E1125 08:57:04.496278 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7311b5d5230dbc934e23cfb313d32df7db1b4db109baea4cb521fca79c20384\": container with ID starting with c7311b5d5230dbc934e23cfb313d32df7db1b4db109baea4cb521fca79c20384 not found: ID does not exist" containerID="c7311b5d5230dbc934e23cfb313d32df7db1b4db109baea4cb521fca79c20384" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.496309 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7311b5d5230dbc934e23cfb313d32df7db1b4db109baea4cb521fca79c20384"} err="failed to get container status \"c7311b5d5230dbc934e23cfb313d32df7db1b4db109baea4cb521fca79c20384\": rpc error: code = NotFound desc = could not find container \"c7311b5d5230dbc934e23cfb313d32df7db1b4db109baea4cb521fca79c20384\": container with ID starting with c7311b5d5230dbc934e23cfb313d32df7db1b4db109baea4cb521fca79c20384 not found: ID does not exist" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.496329 4482 scope.go:117] "RemoveContainer" containerID="97645ac573cc5518e1e8175ec5dd6a02c568c67c33975c399e0e30204ff7fcbd" Nov 25 08:57:04 crc kubenswrapper[4482]: E1125 08:57:04.496661 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97645ac573cc5518e1e8175ec5dd6a02c568c67c33975c399e0e30204ff7fcbd\": container with ID starting with 97645ac573cc5518e1e8175ec5dd6a02c568c67c33975c399e0e30204ff7fcbd not found: ID does not exist" containerID="97645ac573cc5518e1e8175ec5dd6a02c568c67c33975c399e0e30204ff7fcbd" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.496722 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97645ac573cc5518e1e8175ec5dd6a02c568c67c33975c399e0e30204ff7fcbd"} err="failed to get container status \"97645ac573cc5518e1e8175ec5dd6a02c568c67c33975c399e0e30204ff7fcbd\": rpc error: code = NotFound desc = could not find container \"97645ac573cc5518e1e8175ec5dd6a02c568c67c33975c399e0e30204ff7fcbd\": container with ID starting with 97645ac573cc5518e1e8175ec5dd6a02c568c67c33975c399e0e30204ff7fcbd not found: ID does not exist" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.496779 4482 scope.go:117] "RemoveContainer" 
containerID="f716c5cd87fa148612ce018e655b936d8e7a9ff6deb7827cabe57b613cd612e6" Nov 25 08:57:04 crc kubenswrapper[4482]: E1125 08:57:04.497030 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f716c5cd87fa148612ce018e655b936d8e7a9ff6deb7827cabe57b613cd612e6\": container with ID starting with f716c5cd87fa148612ce018e655b936d8e7a9ff6deb7827cabe57b613cd612e6 not found: ID does not exist" containerID="f716c5cd87fa148612ce018e655b936d8e7a9ff6deb7827cabe57b613cd612e6" Nov 25 08:57:04 crc kubenswrapper[4482]: I1125 08:57:04.497067 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f716c5cd87fa148612ce018e655b936d8e7a9ff6deb7827cabe57b613cd612e6"} err="failed to get container status \"f716c5cd87fa148612ce018e655b936d8e7a9ff6deb7827cabe57b613cd612e6\": rpc error: code = NotFound desc = could not find container \"f716c5cd87fa148612ce018e655b936d8e7a9ff6deb7827cabe57b613cd612e6\": container with ID starting with f716c5cd87fa148612ce018e655b936d8e7a9ff6deb7827cabe57b613cd612e6 not found: ID does not exist" Nov 25 08:57:05 crc kubenswrapper[4482]: I1125 08:57:05.847712 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="905f3018-14a3-4dc5-90a6-c1b0228e32e7" path="/var/lib/kubelet/pods/905f3018-14a3-4dc5-90a6-c1b0228e32e7/volumes" Nov 25 08:57:06 crc kubenswrapper[4482]: I1125 08:57:06.835766 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxkw9"] Nov 25 08:57:06 crc kubenswrapper[4482]: I1125 08:57:06.836025 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pxkw9" podUID="fc3b1759-9f6e-40ac-9682-cc76322e5168" containerName="registry-server" containerID="cri-o://5e82f96285d41dce1fc5c57e2d5ab4f666e814b9fc0c0fbd3d732a024885b8c6" gracePeriod=2 Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.266407 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.285891 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc3b1759-9f6e-40ac-9682-cc76322e5168-utilities\") pod \"fc3b1759-9f6e-40ac-9682-cc76322e5168\" (UID: \"fc3b1759-9f6e-40ac-9682-cc76322e5168\") " Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.286436 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc3b1759-9f6e-40ac-9682-cc76322e5168-catalog-content\") pod \"fc3b1759-9f6e-40ac-9682-cc76322e5168\" (UID: \"fc3b1759-9f6e-40ac-9682-cc76322e5168\") " Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.286609 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc3b1759-9f6e-40ac-9682-cc76322e5168-utilities" (OuterVolumeSpecName: "utilities") pod "fc3b1759-9f6e-40ac-9682-cc76322e5168" (UID: "fc3b1759-9f6e-40ac-9682-cc76322e5168"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.286768 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njk6z\" (UniqueName: \"kubernetes.io/projected/fc3b1759-9f6e-40ac-9682-cc76322e5168-kube-api-access-njk6z\") pod \"fc3b1759-9f6e-40ac-9682-cc76322e5168\" (UID: \"fc3b1759-9f6e-40ac-9682-cc76322e5168\") " Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.287614 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc3b1759-9f6e-40ac-9682-cc76322e5168-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.302646 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc3b1759-9f6e-40ac-9682-cc76322e5168-kube-api-access-njk6z" (OuterVolumeSpecName: "kube-api-access-njk6z") pod "fc3b1759-9f6e-40ac-9682-cc76322e5168" (UID: "fc3b1759-9f6e-40ac-9682-cc76322e5168"). InnerVolumeSpecName "kube-api-access-njk6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.310983 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc3b1759-9f6e-40ac-9682-cc76322e5168-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc3b1759-9f6e-40ac-9682-cc76322e5168" (UID: "fc3b1759-9f6e-40ac-9682-cc76322e5168"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.390921 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc3b1759-9f6e-40ac-9682-cc76322e5168-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.391289 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njk6z\" (UniqueName: \"kubernetes.io/projected/fc3b1759-9f6e-40ac-9682-cc76322e5168-kube-api-access-njk6z\") on node \"crc\" DevicePath \"\"" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.434835 4482 generic.go:334] "Generic (PLEG): container finished" podID="fc3b1759-9f6e-40ac-9682-cc76322e5168" containerID="5e82f96285d41dce1fc5c57e2d5ab4f666e814b9fc0c0fbd3d732a024885b8c6" exitCode=0 Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.434904 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxkw9" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.434932 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxkw9" event={"ID":"fc3b1759-9f6e-40ac-9682-cc76322e5168","Type":"ContainerDied","Data":"5e82f96285d41dce1fc5c57e2d5ab4f666e814b9fc0c0fbd3d732a024885b8c6"} Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.435010 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxkw9" event={"ID":"fc3b1759-9f6e-40ac-9682-cc76322e5168","Type":"ContainerDied","Data":"12248fff95d1ac2f1602e9eb82018386aec39298c89d24659e5f3f63faa84ade"} Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.435053 4482 scope.go:117] "RemoveContainer" containerID="5e82f96285d41dce1fc5c57e2d5ab4f666e814b9fc0c0fbd3d732a024885b8c6" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.462870 4482 scope.go:117] "RemoveContainer" containerID="e09ecb719a4e5e8ec597d0263f40f046879f19ea68a6b93e27804351b3912cf1" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.481766 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxkw9"] Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.488842 4482 scope.go:117] "RemoveContainer" containerID="302b76c11fb139791f50551c41a1e31be5c12a5d7b1eb02b1a35b370c198b6a0" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.490729 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxkw9"] Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.536217 4482 scope.go:117] "RemoveContainer" containerID="5e82f96285d41dce1fc5c57e2d5ab4f666e814b9fc0c0fbd3d732a024885b8c6" Nov 25 08:57:07 crc kubenswrapper[4482]: E1125 08:57:07.536581 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e82f96285d41dce1fc5c57e2d5ab4f666e814b9fc0c0fbd3d732a024885b8c6\": container with ID starting with 5e82f96285d41dce1fc5c57e2d5ab4f666e814b9fc0c0fbd3d732a024885b8c6 not found: ID does not exist" containerID="5e82f96285d41dce1fc5c57e2d5ab4f666e814b9fc0c0fbd3d732a024885b8c6" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.536614 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e82f96285d41dce1fc5c57e2d5ab4f666e814b9fc0c0fbd3d732a024885b8c6"} err="failed to get container status \"5e82f96285d41dce1fc5c57e2d5ab4f666e814b9fc0c0fbd3d732a024885b8c6\": rpc error: code = NotFound desc = could not find container \"5e82f96285d41dce1fc5c57e2d5ab4f666e814b9fc0c0fbd3d732a024885b8c6\": container with ID starting with 5e82f96285d41dce1fc5c57e2d5ab4f666e814b9fc0c0fbd3d732a024885b8c6 not found: ID does not exist" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.536632 4482 scope.go:117] "RemoveContainer" containerID="e09ecb719a4e5e8ec597d0263f40f046879f19ea68a6b93e27804351b3912cf1" Nov 25 08:57:07 crc kubenswrapper[4482]: E1125 08:57:07.537009 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09ecb719a4e5e8ec597d0263f40f046879f19ea68a6b93e27804351b3912cf1\": container with ID starting with e09ecb719a4e5e8ec597d0263f40f046879f19ea68a6b93e27804351b3912cf1 not found: ID does not exist" containerID="e09ecb719a4e5e8ec597d0263f40f046879f19ea68a6b93e27804351b3912cf1" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.537056 4482 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09ecb719a4e5e8ec597d0263f40f046879f19ea68a6b93e27804351b3912cf1"} err="failed to get container status \"e09ecb719a4e5e8ec597d0263f40f046879f19ea68a6b93e27804351b3912cf1\": rpc error: code = NotFound desc = could not find container \"e09ecb719a4e5e8ec597d0263f40f046879f19ea68a6b93e27804351b3912cf1\": container with ID starting with e09ecb719a4e5e8ec597d0263f40f046879f19ea68a6b93e27804351b3912cf1 not found: ID does not exist" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.537102 4482 scope.go:117] "RemoveContainer" containerID="302b76c11fb139791f50551c41a1e31be5c12a5d7b1eb02b1a35b370c198b6a0" Nov 25 08:57:07 crc kubenswrapper[4482]: E1125 08:57:07.537474 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"302b76c11fb139791f50551c41a1e31be5c12a5d7b1eb02b1a35b370c198b6a0\": container with ID starting with 302b76c11fb139791f50551c41a1e31be5c12a5d7b1eb02b1a35b370c198b6a0 not found: ID does not exist" containerID="302b76c11fb139791f50551c41a1e31be5c12a5d7b1eb02b1a35b370c198b6a0" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.537498 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"302b76c11fb139791f50551c41a1e31be5c12a5d7b1eb02b1a35b370c198b6a0"} err="failed to get container status \"302b76c11fb139791f50551c41a1e31be5c12a5d7b1eb02b1a35b370c198b6a0\": rpc error: code = NotFound desc = could not find container \"302b76c11fb139791f50551c41a1e31be5c12a5d7b1eb02b1a35b370c198b6a0\": container with ID starting with 302b76c11fb139791f50551c41a1e31be5c12a5d7b1eb02b1a35b370c198b6a0 not found: ID does not exist" Nov 25 08:57:07 crc kubenswrapper[4482]: I1125 08:57:07.841770 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc3b1759-9f6e-40ac-9682-cc76322e5168" path="/var/lib/kubelet/pods/fc3b1759-9f6e-40ac-9682-cc76322e5168/volumes" Nov 25 08:57:10 crc kubenswrapper[4482]: I1125 08:57:10.832525 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:57:10 crc kubenswrapper[4482]: E1125 08:57:10.833845 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:57:25 crc kubenswrapper[4482]: I1125 08:57:25.836992 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:57:25 crc kubenswrapper[4482]: E1125 08:57:25.837866 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:57:36 crc kubenswrapper[4482]: I1125 08:57:36.831458 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:57:36 crc 
kubenswrapper[4482]: E1125 08:57:36.832262 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:57:47 crc kubenswrapper[4482]: I1125 08:57:47.832220 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:57:47 crc kubenswrapper[4482]: E1125 08:57:47.838416 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:58:01 crc kubenswrapper[4482]: I1125 08:58:01.831029 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:58:01 crc kubenswrapper[4482]: E1125 08:58:01.832009 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:58:12 crc kubenswrapper[4482]: I1125 08:58:12.830412 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:58:12 crc kubenswrapper[4482]: E1125 08:58:12.831726 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:58:23 crc kubenswrapper[4482]: I1125 08:58:23.832349 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:58:23 crc kubenswrapper[4482]: E1125 08:58:23.833437 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:58:38 crc kubenswrapper[4482]: I1125 08:58:38.831201 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:58:38 crc kubenswrapper[4482]: E1125 08:58:38.832145 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:58:52 crc kubenswrapper[4482]: I1125 08:58:52.831903 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:58:52 crc kubenswrapper[4482]: E1125 08:58:52.832738 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:59:03 crc kubenswrapper[4482]: I1125 08:59:03.830470 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:59:03 crc kubenswrapper[4482]: E1125 08:59:03.833094 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:59:14 crc kubenswrapper[4482]: I1125 08:59:14.830810 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:59:14 crc kubenswrapper[4482]: E1125 08:59:14.831742 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:59:29 crc kubenswrapper[4482]: I1125 08:59:29.830525 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:59:29 crc kubenswrapper[4482]: E1125 08:59:29.831227 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:59:40 crc kubenswrapper[4482]: I1125 08:59:40.831303 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:59:40 crc kubenswrapper[4482]: E1125 08:59:40.832042 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 08:59:53 crc kubenswrapper[4482]: I1125 08:59:53.830690 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 08:59:53 crc kubenswrapper[4482]: E1125 08:59:53.831498 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.165974 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl"] Nov 25 09:00:00 crc kubenswrapper[4482]: E1125 09:00:00.166899 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3b1759-9f6e-40ac-9682-cc76322e5168" containerName="registry-server" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.166912 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3b1759-9f6e-40ac-9682-cc76322e5168" containerName="registry-server" Nov 25 09:00:00 crc kubenswrapper[4482]: E1125 09:00:00.166923 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="905f3018-14a3-4dc5-90a6-c1b0228e32e7" containerName="extract-utilities" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.166928 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="905f3018-14a3-4dc5-90a6-c1b0228e32e7" containerName="extract-utilities" Nov 25 09:00:00 crc kubenswrapper[4482]: E1125 09:00:00.166935 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3b1759-9f6e-40ac-9682-cc76322e5168" containerName="extract-content" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.166940 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3b1759-9f6e-40ac-9682-cc76322e5168" containerName="extract-content" Nov 25 09:00:00 crc kubenswrapper[4482]: E1125 09:00:00.166952 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3b1759-9f6e-40ac-9682-cc76322e5168" containerName="extract-utilities" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.166957 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3b1759-9f6e-40ac-9682-cc76322e5168" containerName="extract-utilities" Nov 25 09:00:00 crc kubenswrapper[4482]: E1125 09:00:00.166966 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="905f3018-14a3-4dc5-90a6-c1b0228e32e7" containerName="registry-server" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.166970 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="905f3018-14a3-4dc5-90a6-c1b0228e32e7" containerName="registry-server" Nov 25 09:00:00 crc kubenswrapper[4482]: E1125 09:00:00.166985 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="905f3018-14a3-4dc5-90a6-c1b0228e32e7" containerName="extract-content" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.166991 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="905f3018-14a3-4dc5-90a6-c1b0228e32e7" containerName="extract-content" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.167140 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc3b1759-9f6e-40ac-9682-cc76322e5168" 
containerName="registry-server" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.167158 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="905f3018-14a3-4dc5-90a6-c1b0228e32e7" containerName="registry-server" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.168417 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.177004 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f04bc89c-9ef9-4487-a504-ad8b9dc91025-config-volume\") pod \"collect-profiles-29401020-8ssnl\" (UID: \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.177143 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4ljf\" (UniqueName: \"kubernetes.io/projected/f04bc89c-9ef9-4487-a504-ad8b9dc91025-kube-api-access-k4ljf\") pod \"collect-profiles-29401020-8ssnl\" (UID: \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.177407 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f04bc89c-9ef9-4487-a504-ad8b9dc91025-secret-volume\") pod \"collect-profiles-29401020-8ssnl\" (UID: \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.182909 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl"] Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.183776 4482 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.192578 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.279718 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f04bc89c-9ef9-4487-a504-ad8b9dc91025-secret-volume\") pod \"collect-profiles-29401020-8ssnl\" (UID: \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.279839 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f04bc89c-9ef9-4487-a504-ad8b9dc91025-config-volume\") pod \"collect-profiles-29401020-8ssnl\" (UID: \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.279868 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4ljf\" (UniqueName: \"kubernetes.io/projected/f04bc89c-9ef9-4487-a504-ad8b9dc91025-kube-api-access-k4ljf\") pod \"collect-profiles-29401020-8ssnl\" (UID: 
\"f04bc89c-9ef9-4487-a504-ad8b9dc91025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.281096 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f04bc89c-9ef9-4487-a504-ad8b9dc91025-config-volume\") pod \"collect-profiles-29401020-8ssnl\" (UID: \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.286252 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f04bc89c-9ef9-4487-a504-ad8b9dc91025-secret-volume\") pod \"collect-profiles-29401020-8ssnl\" (UID: \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.294633 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4ljf\" (UniqueName: \"kubernetes.io/projected/f04bc89c-9ef9-4487-a504-ad8b9dc91025-kube-api-access-k4ljf\") pod \"collect-profiles-29401020-8ssnl\" (UID: \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.504991 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.934602 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl"] Nov 25 09:00:00 crc kubenswrapper[4482]: I1125 09:00:00.955292 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" event={"ID":"f04bc89c-9ef9-4487-a504-ad8b9dc91025","Type":"ContainerStarted","Data":"2c62273972611156ca0c6e89ba2d1b9ed035202a29eaa4b3811078f749668525"} Nov 25 09:00:01 crc kubenswrapper[4482]: I1125 09:00:01.963949 4482 generic.go:334] "Generic (PLEG): container finished" podID="f04bc89c-9ef9-4487-a504-ad8b9dc91025" containerID="d13b8e152b81d1cfc9ff319d0519ae9d49a3da67fa933875267ad139c2b0a1db" exitCode=0 Nov 25 09:00:01 crc kubenswrapper[4482]: I1125 09:00:01.964052 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" event={"ID":"f04bc89c-9ef9-4487-a504-ad8b9dc91025","Type":"ContainerDied","Data":"d13b8e152b81d1cfc9ff319d0519ae9d49a3da67fa933875267ad139c2b0a1db"} Nov 25 09:00:03 crc kubenswrapper[4482]: I1125 09:00:03.302022 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" Nov 25 09:00:03 crc kubenswrapper[4482]: I1125 09:00:03.451802 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4ljf\" (UniqueName: \"kubernetes.io/projected/f04bc89c-9ef9-4487-a504-ad8b9dc91025-kube-api-access-k4ljf\") pod \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\" (UID: \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\") " Nov 25 09:00:03 crc kubenswrapper[4482]: I1125 09:00:03.452019 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f04bc89c-9ef9-4487-a504-ad8b9dc91025-secret-volume\") pod \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\" (UID: \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\") " Nov 25 09:00:03 crc kubenswrapper[4482]: I1125 09:00:03.452255 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f04bc89c-9ef9-4487-a504-ad8b9dc91025-config-volume\") pod \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\" (UID: \"f04bc89c-9ef9-4487-a504-ad8b9dc91025\") " Nov 25 09:00:03 crc kubenswrapper[4482]: I1125 09:00:03.453256 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f04bc89c-9ef9-4487-a504-ad8b9dc91025-config-volume" (OuterVolumeSpecName: "config-volume") pod "f04bc89c-9ef9-4487-a504-ad8b9dc91025" (UID: "f04bc89c-9ef9-4487-a504-ad8b9dc91025"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:00:03 crc kubenswrapper[4482]: I1125 09:00:03.459258 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f04bc89c-9ef9-4487-a504-ad8b9dc91025-kube-api-access-k4ljf" (OuterVolumeSpecName: "kube-api-access-k4ljf") pod "f04bc89c-9ef9-4487-a504-ad8b9dc91025" (UID: "f04bc89c-9ef9-4487-a504-ad8b9dc91025"). InnerVolumeSpecName "kube-api-access-k4ljf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:00:03 crc kubenswrapper[4482]: I1125 09:00:03.459284 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f04bc89c-9ef9-4487-a504-ad8b9dc91025-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f04bc89c-9ef9-4487-a504-ad8b9dc91025" (UID: "f04bc89c-9ef9-4487-a504-ad8b9dc91025"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:03 crc kubenswrapper[4482]: I1125 09:00:03.554790 4482 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f04bc89c-9ef9-4487-a504-ad8b9dc91025-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:03 crc kubenswrapper[4482]: I1125 09:00:03.554826 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4ljf\" (UniqueName: \"kubernetes.io/projected/f04bc89c-9ef9-4487-a504-ad8b9dc91025-kube-api-access-k4ljf\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:03 crc kubenswrapper[4482]: I1125 09:00:03.554841 4482 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f04bc89c-9ef9-4487-a504-ad8b9dc91025-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:03 crc kubenswrapper[4482]: I1125 09:00:03.982473 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" event={"ID":"f04bc89c-9ef9-4487-a504-ad8b9dc91025","Type":"ContainerDied","Data":"2c62273972611156ca0c6e89ba2d1b9ed035202a29eaa4b3811078f749668525"} Nov 25 09:00:03 crc kubenswrapper[4482]: I1125 09:00:03.982508 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-8ssnl" Nov 25 09:00:03 crc kubenswrapper[4482]: I1125 09:00:03.982854 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c62273972611156ca0c6e89ba2d1b9ed035202a29eaa4b3811078f749668525" Nov 25 09:00:04 crc kubenswrapper[4482]: I1125 09:00:04.386997 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp"] Nov 25 09:00:04 crc kubenswrapper[4482]: I1125 09:00:04.397075 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-4k4bp"] Nov 25 09:00:05 crc kubenswrapper[4482]: I1125 09:00:05.839388 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 09:00:05 crc kubenswrapper[4482]: E1125 09:00:05.840113 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:00:05 crc kubenswrapper[4482]: I1125 09:00:05.843479 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92afaaee-b11e-4bce-9967-673ca19b70f0" path="/var/lib/kubelet/pods/92afaaee-b11e-4bce-9967-673ca19b70f0/volumes" Nov 25 09:00:20 crc kubenswrapper[4482]: I1125 09:00:20.831203 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 09:00:21 crc kubenswrapper[4482]: I1125 09:00:21.120556 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"d815cfb71cd821f80a05f5726025f6ed7e62391c4548400f2df38b4128253c78"} Nov 25 09:00:42 crc kubenswrapper[4482]: I1125 09:00:42.394095 4482 scope.go:117] 
"RemoveContainer" containerID="fd73fa1ef7e4667f049473daa082825beac83f6420f85f7078f9ad786d27c83c" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.138354 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29401021-76pgn"] Nov 25 09:01:00 crc kubenswrapper[4482]: E1125 09:01:00.140400 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04bc89c-9ef9-4487-a504-ad8b9dc91025" containerName="collect-profiles" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.140437 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04bc89c-9ef9-4487-a504-ad8b9dc91025" containerName="collect-profiles" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.140594 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04bc89c-9ef9-4487-a504-ad8b9dc91025" containerName="collect-profiles" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.141575 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.145548 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-fernet-keys\") pod \"keystone-cron-29401021-76pgn\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.145765 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-config-data\") pod \"keystone-cron-29401021-76pgn\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.145830 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-combined-ca-bundle\") pod \"keystone-cron-29401021-76pgn\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.145909 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2g5l\" (UniqueName: \"kubernetes.io/projected/51563f91-ccc7-4339-a2f2-497bc070e77d-kube-api-access-t2g5l\") pod \"keystone-cron-29401021-76pgn\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.150740 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401021-76pgn"] Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.247675 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-fernet-keys\") pod \"keystone-cron-29401021-76pgn\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.247777 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-config-data\") pod \"keystone-cron-29401021-76pgn\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") 
" pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.247805 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-combined-ca-bundle\") pod \"keystone-cron-29401021-76pgn\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.247840 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2g5l\" (UniqueName: \"kubernetes.io/projected/51563f91-ccc7-4339-a2f2-497bc070e77d-kube-api-access-t2g5l\") pod \"keystone-cron-29401021-76pgn\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.252384 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-config-data\") pod \"keystone-cron-29401021-76pgn\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.252877 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-combined-ca-bundle\") pod \"keystone-cron-29401021-76pgn\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.275943 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2g5l\" (UniqueName: \"kubernetes.io/projected/51563f91-ccc7-4339-a2f2-497bc070e77d-kube-api-access-t2g5l\") pod \"keystone-cron-29401021-76pgn\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.308780 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-fernet-keys\") pod \"keystone-cron-29401021-76pgn\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.473848 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:00 crc kubenswrapper[4482]: I1125 09:01:00.866140 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401021-76pgn"] Nov 25 09:01:00 crc kubenswrapper[4482]: W1125 09:01:00.867037 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51563f91_ccc7_4339_a2f2_497bc070e77d.slice/crio-59a72a91421747d5aad7d2b07fb2ef38c49354a1a036ad07fb3ab13e34a6347b WatchSource:0}: Error finding container 59a72a91421747d5aad7d2b07fb2ef38c49354a1a036ad07fb3ab13e34a6347b: Status 404 returned error can't find the container with id 59a72a91421747d5aad7d2b07fb2ef38c49354a1a036ad07fb3ab13e34a6347b Nov 25 09:01:01 crc kubenswrapper[4482]: I1125 09:01:01.408378 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401021-76pgn" event={"ID":"51563f91-ccc7-4339-a2f2-497bc070e77d","Type":"ContainerStarted","Data":"8c4ec4da4f8f42fbb3caffb62f0e40dfe6afb5e49c1dff8d5af4a8302a377b8b"} Nov 25 09:01:01 crc kubenswrapper[4482]: I1125 09:01:01.409657 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401021-76pgn" event={"ID":"51563f91-ccc7-4339-a2f2-497bc070e77d","Type":"ContainerStarted","Data":"59a72a91421747d5aad7d2b07fb2ef38c49354a1a036ad07fb3ab13e34a6347b"} Nov 25 09:01:01 crc kubenswrapper[4482]: I1125 09:01:01.429790 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29401021-76pgn" podStartSLOduration=1.429776116 podStartE2EDuration="1.429776116s" podCreationTimestamp="2025-11-25 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:01:01.424093036 +0000 UTC m=+8035.912324295" watchObservedRunningTime="2025-11-25 09:01:01.429776116 +0000 UTC m=+8035.918007375" Nov 25 09:01:03 crc kubenswrapper[4482]: I1125 09:01:03.421944 4482 generic.go:334] "Generic (PLEG): container finished" podID="51563f91-ccc7-4339-a2f2-497bc070e77d" containerID="8c4ec4da4f8f42fbb3caffb62f0e40dfe6afb5e49c1dff8d5af4a8302a377b8b" exitCode=0 Nov 25 09:01:03 crc kubenswrapper[4482]: I1125 09:01:03.422014 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401021-76pgn" event={"ID":"51563f91-ccc7-4339-a2f2-497bc070e77d","Type":"ContainerDied","Data":"8c4ec4da4f8f42fbb3caffb62f0e40dfe6afb5e49c1dff8d5af4a8302a377b8b"} Nov 25 09:01:04 crc kubenswrapper[4482]: I1125 09:01:04.783249 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:01:04 crc kubenswrapper[4482]: I1125 09:01:04.932729 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-combined-ca-bundle\") pod \"51563f91-ccc7-4339-a2f2-497bc070e77d\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " Nov 25 09:01:04 crc kubenswrapper[4482]: I1125 09:01:04.933020 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2g5l\" (UniqueName: \"kubernetes.io/projected/51563f91-ccc7-4339-a2f2-497bc070e77d-kube-api-access-t2g5l\") pod \"51563f91-ccc7-4339-a2f2-497bc070e77d\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " Nov 25 09:01:04 crc kubenswrapper[4482]: I1125 09:01:04.933078 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-fernet-keys\") pod \"51563f91-ccc7-4339-a2f2-497bc070e77d\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " Nov 25 09:01:04 crc kubenswrapper[4482]: I1125 09:01:04.933142 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-config-data\") pod \"51563f91-ccc7-4339-a2f2-497bc070e77d\" (UID: \"51563f91-ccc7-4339-a2f2-497bc070e77d\") " Nov 25 09:01:04 crc kubenswrapper[4482]: I1125 09:01:04.938834 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "51563f91-ccc7-4339-a2f2-497bc070e77d" (UID: "51563f91-ccc7-4339-a2f2-497bc070e77d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:04 crc kubenswrapper[4482]: I1125 09:01:04.938876 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51563f91-ccc7-4339-a2f2-497bc070e77d-kube-api-access-t2g5l" (OuterVolumeSpecName: "kube-api-access-t2g5l") pod "51563f91-ccc7-4339-a2f2-497bc070e77d" (UID: "51563f91-ccc7-4339-a2f2-497bc070e77d"). InnerVolumeSpecName "kube-api-access-t2g5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:01:04 crc kubenswrapper[4482]: I1125 09:01:04.958330 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51563f91-ccc7-4339-a2f2-497bc070e77d" (UID: "51563f91-ccc7-4339-a2f2-497bc070e77d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:04 crc kubenswrapper[4482]: I1125 09:01:04.976908 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-config-data" (OuterVolumeSpecName: "config-data") pod "51563f91-ccc7-4339-a2f2-497bc070e77d" (UID: "51563f91-ccc7-4339-a2f2-497bc070e77d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:05 crc kubenswrapper[4482]: I1125 09:01:05.035736 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2g5l\" (UniqueName: \"kubernetes.io/projected/51563f91-ccc7-4339-a2f2-497bc070e77d-kube-api-access-t2g5l\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:05 crc kubenswrapper[4482]: I1125 09:01:05.035765 4482 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:05 crc kubenswrapper[4482]: I1125 09:01:05.035775 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:05 crc kubenswrapper[4482]: I1125 09:01:05.035783 4482 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51563f91-ccc7-4339-a2f2-497bc070e77d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:05 crc kubenswrapper[4482]: I1125 09:01:05.436162 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401021-76pgn" event={"ID":"51563f91-ccc7-4339-a2f2-497bc070e77d","Type":"ContainerDied","Data":"59a72a91421747d5aad7d2b07fb2ef38c49354a1a036ad07fb3ab13e34a6347b"} Nov 25 09:01:05 crc kubenswrapper[4482]: I1125 09:01:05.436385 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59a72a91421747d5aad7d2b07fb2ef38c49354a1a036ad07fb3ab13e34a6347b" Nov 25 09:01:05 crc kubenswrapper[4482]: I1125 09:01:05.436223 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401021-76pgn" Nov 25 09:02:39 crc kubenswrapper[4482]: I1125 09:02:39.118350 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:02:39 crc kubenswrapper[4482]: I1125 09:02:39.119012 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:03:09 crc kubenswrapper[4482]: I1125 09:03:09.117307 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:03:09 crc kubenswrapper[4482]: I1125 09:03:09.117604 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:03:39 crc kubenswrapper[4482]: I1125 09:03:39.117712 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:03:39 crc kubenswrapper[4482]: I1125 09:03:39.118371 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:03:39 crc kubenswrapper[4482]: I1125 09:03:39.118417 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" Nov 25 09:03:39 crc kubenswrapper[4482]: I1125 09:03:39.118949 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d815cfb71cd821f80a05f5726025f6ed7e62391c4548400f2df38b4128253c78"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:03:39 crc kubenswrapper[4482]: I1125 09:03:39.118997 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://d815cfb71cd821f80a05f5726025f6ed7e62391c4548400f2df38b4128253c78" gracePeriod=600 Nov 25 09:03:39 crc kubenswrapper[4482]: I1125 09:03:39.632830 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="d815cfb71cd821f80a05f5726025f6ed7e62391c4548400f2df38b4128253c78" exitCode=0 Nov 25 09:03:39 crc kubenswrapper[4482]: I1125 09:03:39.632891 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"d815cfb71cd821f80a05f5726025f6ed7e62391c4548400f2df38b4128253c78"} Nov 25 09:03:39 crc kubenswrapper[4482]: I1125 09:03:39.633108 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5"} Nov 25 09:03:39 crc kubenswrapper[4482]: I1125 09:03:39.633142 4482 scope.go:117] "RemoveContainer" containerID="3b957d641f9cb281ec432ee4b91138a0ae7ea1abc28bb732f5f1ba46d6526e40" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.595725 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bvs4p"] Nov 25 09:04:06 crc kubenswrapper[4482]: E1125 09:04:06.596366 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51563f91-ccc7-4339-a2f2-497bc070e77d" containerName="keystone-cron" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.596378 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="51563f91-ccc7-4339-a2f2-497bc070e77d" containerName="keystone-cron" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.596549 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="51563f91-ccc7-4339-a2f2-497bc070e77d" containerName="keystone-cron" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.597830 4482 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.606403 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bvs4p"] Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.724116 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9t22\" (UniqueName: \"kubernetes.io/projected/a3b224ed-6c72-437f-b012-4071e2b63fd4-kube-api-access-m9t22\") pod \"certified-operators-bvs4p\" (UID: \"a3b224ed-6c72-437f-b012-4071e2b63fd4\") " pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.724441 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b224ed-6c72-437f-b012-4071e2b63fd4-catalog-content\") pod \"certified-operators-bvs4p\" (UID: \"a3b224ed-6c72-437f-b012-4071e2b63fd4\") " pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.724655 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b224ed-6c72-437f-b012-4071e2b63fd4-utilities\") pod \"certified-operators-bvs4p\" (UID: \"a3b224ed-6c72-437f-b012-4071e2b63fd4\") " pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.826889 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b224ed-6c72-437f-b012-4071e2b63fd4-catalog-content\") pod \"certified-operators-bvs4p\" (UID: \"a3b224ed-6c72-437f-b012-4071e2b63fd4\") " pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.826986 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b224ed-6c72-437f-b012-4071e2b63fd4-utilities\") pod \"certified-operators-bvs4p\" (UID: \"a3b224ed-6c72-437f-b012-4071e2b63fd4\") " pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.827073 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9t22\" (UniqueName: \"kubernetes.io/projected/a3b224ed-6c72-437f-b012-4071e2b63fd4-kube-api-access-m9t22\") pod \"certified-operators-bvs4p\" (UID: \"a3b224ed-6c72-437f-b012-4071e2b63fd4\") " pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.827383 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b224ed-6c72-437f-b012-4071e2b63fd4-catalog-content\") pod \"certified-operators-bvs4p\" (UID: \"a3b224ed-6c72-437f-b012-4071e2b63fd4\") " pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.827520 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b224ed-6c72-437f-b012-4071e2b63fd4-utilities\") pod \"certified-operators-bvs4p\" (UID: \"a3b224ed-6c72-437f-b012-4071e2b63fd4\") " pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.846145 4482 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9t22\" (UniqueName: \"kubernetes.io/projected/a3b224ed-6c72-437f-b012-4071e2b63fd4-kube-api-access-m9t22\") pod \"certified-operators-bvs4p\" (UID: \"a3b224ed-6c72-437f-b012-4071e2b63fd4\") " pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:06 crc kubenswrapper[4482]: I1125 09:04:06.918164 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:07 crc kubenswrapper[4482]: I1125 09:04:07.384246 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bvs4p"] Nov 25 09:04:07 crc kubenswrapper[4482]: I1125 09:04:07.828195 4482 generic.go:334] "Generic (PLEG): container finished" podID="a3b224ed-6c72-437f-b012-4071e2b63fd4" containerID="eaf9365e5fc8fd0c6a6af55bc53b0b784ee708d381fdbf73921abdd4c1fa33d9" exitCode=0 Nov 25 09:04:07 crc kubenswrapper[4482]: I1125 09:04:07.828289 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvs4p" event={"ID":"a3b224ed-6c72-437f-b012-4071e2b63fd4","Type":"ContainerDied","Data":"eaf9365e5fc8fd0c6a6af55bc53b0b784ee708d381fdbf73921abdd4c1fa33d9"} Nov 25 09:04:07 crc kubenswrapper[4482]: I1125 09:04:07.828641 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvs4p" event={"ID":"a3b224ed-6c72-437f-b012-4071e2b63fd4","Type":"ContainerStarted","Data":"4c7865d8bfa14b06d97f7de6b213ebba21330dca7122366b39424112832f00de"} Nov 25 09:04:07 crc kubenswrapper[4482]: I1125 09:04:07.829928 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:04:13 crc kubenswrapper[4482]: I1125 09:04:13.881348 4482 generic.go:334] "Generic (PLEG): container finished" podID="a3b224ed-6c72-437f-b012-4071e2b63fd4" containerID="13ac5a4d2285197e61d880660701b532517fa54783afd299fd834ea725b69c94" exitCode=0 Nov 25 09:04:13 crc kubenswrapper[4482]: I1125 09:04:13.881387 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvs4p" event={"ID":"a3b224ed-6c72-437f-b012-4071e2b63fd4","Type":"ContainerDied","Data":"13ac5a4d2285197e61d880660701b532517fa54783afd299fd834ea725b69c94"} Nov 25 09:04:14 crc kubenswrapper[4482]: I1125 09:04:14.890276 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bvs4p" event={"ID":"a3b224ed-6c72-437f-b012-4071e2b63fd4","Type":"ContainerStarted","Data":"94e576291da529fe3a189c056f94d45047a0c80c8aacaaa4f1f65042846e1713"} Nov 25 09:04:14 crc kubenswrapper[4482]: I1125 09:04:14.905594 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bvs4p" podStartSLOduration=2.338484069 podStartE2EDuration="8.905580339s" podCreationTimestamp="2025-11-25 09:04:06 +0000 UTC" firstStartedPulling="2025-11-25 09:04:07.829700728 +0000 UTC m=+8222.317931987" lastFinishedPulling="2025-11-25 09:04:14.396796997 +0000 UTC m=+8228.885028257" observedRunningTime="2025-11-25 09:04:14.904124545 +0000 UTC m=+8229.392355804" watchObservedRunningTime="2025-11-25 09:04:14.905580339 +0000 UTC m=+8229.393811598" Nov 25 09:04:16 crc kubenswrapper[4482]: I1125 09:04:16.918494 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:16 crc kubenswrapper[4482]: I1125 09:04:16.918705 4482 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:17 crc kubenswrapper[4482]: I1125 09:04:17.952066 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bvs4p" podUID="a3b224ed-6c72-437f-b012-4071e2b63fd4" containerName="registry-server" probeResult="failure" output=< Nov 25 09:04:17 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 09:04:17 crc kubenswrapper[4482]: > Nov 25 09:04:26 crc kubenswrapper[4482]: I1125 09:04:26.951652 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:26 crc kubenswrapper[4482]: I1125 09:04:26.988842 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bvs4p" Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.040411 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bvs4p"] Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.184283 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8fl5h"] Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.184487 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8fl5h" podUID="a409f14f-4cf5-467e-afec-1fd121548e05" containerName="registry-server" containerID="cri-o://b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27" gracePeriod=2 Nov 25 09:04:27 crc kubenswrapper[4482]: E1125 09:04:27.755872 4482 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27 is running failed: container process not found" containerID="b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 09:04:27 crc kubenswrapper[4482]: E1125 09:04:27.756442 4482 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27 is running failed: container process not found" containerID="b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 09:04:27 crc kubenswrapper[4482]: E1125 09:04:27.756672 4482 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27 is running failed: container process not found" containerID="b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 09:04:27 crc kubenswrapper[4482]: E1125 09:04:27.756711 4482 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-8fl5h" podUID="a409f14f-4cf5-467e-afec-1fd121548e05" containerName="registry-server" Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.795196 4482 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.979039 4482 generic.go:334] "Generic (PLEG): container finished" podID="a409f14f-4cf5-467e-afec-1fd121548e05" containerID="b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27" exitCode=0 Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.979125 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8fl5h" Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.979145 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fl5h" event={"ID":"a409f14f-4cf5-467e-afec-1fd121548e05","Type":"ContainerDied","Data":"b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27"} Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.979524 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fl5h" event={"ID":"a409f14f-4cf5-467e-afec-1fd121548e05","Type":"ContainerDied","Data":"7238740badbe682b154295542787e238fdf9823452227557e0eb5262881ef791"} Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.979561 4482 scope.go:117] "RemoveContainer" containerID="b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27" Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.984404 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngjvj\" (UniqueName: \"kubernetes.io/projected/a409f14f-4cf5-467e-afec-1fd121548e05-kube-api-access-ngjvj\") pod \"a409f14f-4cf5-467e-afec-1fd121548e05\" (UID: \"a409f14f-4cf5-467e-afec-1fd121548e05\") " Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.984499 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a409f14f-4cf5-467e-afec-1fd121548e05-utilities\") pod \"a409f14f-4cf5-467e-afec-1fd121548e05\" (UID: \"a409f14f-4cf5-467e-afec-1fd121548e05\") " Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.984713 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a409f14f-4cf5-467e-afec-1fd121548e05-catalog-content\") pod \"a409f14f-4cf5-467e-afec-1fd121548e05\" (UID: \"a409f14f-4cf5-467e-afec-1fd121548e05\") " Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.985628 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a409f14f-4cf5-467e-afec-1fd121548e05-utilities" (OuterVolumeSpecName: "utilities") pod "a409f14f-4cf5-467e-afec-1fd121548e05" (UID: "a409f14f-4cf5-467e-afec-1fd121548e05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:04:27 crc kubenswrapper[4482]: I1125 09:04:27.995753 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a409f14f-4cf5-467e-afec-1fd121548e05-kube-api-access-ngjvj" (OuterVolumeSpecName: "kube-api-access-ngjvj") pod "a409f14f-4cf5-467e-afec-1fd121548e05" (UID: "a409f14f-4cf5-467e-afec-1fd121548e05"). InnerVolumeSpecName "kube-api-access-ngjvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.012837 4482 scope.go:117] "RemoveContainer" containerID="0a288784bff3a6795669c69c85854b1f0d1d0ae43e0fc440678468442d1f8e99" Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.046724 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a409f14f-4cf5-467e-afec-1fd121548e05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a409f14f-4cf5-467e-afec-1fd121548e05" (UID: "a409f14f-4cf5-467e-afec-1fd121548e05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.050981 4482 scope.go:117] "RemoveContainer" containerID="2e9a8c82c10f90841418d044f0389365f145a6e8417ab992c268e566e5147e56" Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.076370 4482 scope.go:117] "RemoveContainer" containerID="b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27" Nov 25 09:04:28 crc kubenswrapper[4482]: E1125 09:04:28.076989 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27\": container with ID starting with b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27 not found: ID does not exist" containerID="b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27" Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.077082 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27"} err="failed to get container status \"b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27\": rpc error: code = NotFound desc = could not find container \"b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27\": container with ID starting with b9d0770d8a15a340de6eecce940e2189d67aeb71de45259c8c5c315251662e27 not found: ID does not exist" Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.077155 4482 scope.go:117] "RemoveContainer" containerID="0a288784bff3a6795669c69c85854b1f0d1d0ae43e0fc440678468442d1f8e99" Nov 25 09:04:28 crc kubenswrapper[4482]: E1125 09:04:28.077694 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a288784bff3a6795669c69c85854b1f0d1d0ae43e0fc440678468442d1f8e99\": container with ID starting with 0a288784bff3a6795669c69c85854b1f0d1d0ae43e0fc440678468442d1f8e99 not found: ID does not exist" containerID="0a288784bff3a6795669c69c85854b1f0d1d0ae43e0fc440678468442d1f8e99" Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.077729 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a288784bff3a6795669c69c85854b1f0d1d0ae43e0fc440678468442d1f8e99"} err="failed to get container status \"0a288784bff3a6795669c69c85854b1f0d1d0ae43e0fc440678468442d1f8e99\": rpc error: code = NotFound desc = could not find container \"0a288784bff3a6795669c69c85854b1f0d1d0ae43e0fc440678468442d1f8e99\": container with ID starting with 0a288784bff3a6795669c69c85854b1f0d1d0ae43e0fc440678468442d1f8e99 not found: ID does not exist" Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.077751 4482 scope.go:117] "RemoveContainer" containerID="2e9a8c82c10f90841418d044f0389365f145a6e8417ab992c268e566e5147e56" Nov 25 09:04:28 crc kubenswrapper[4482]: 
E1125 09:04:28.077939 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e9a8c82c10f90841418d044f0389365f145a6e8417ab992c268e566e5147e56\": container with ID starting with 2e9a8c82c10f90841418d044f0389365f145a6e8417ab992c268e566e5147e56 not found: ID does not exist" containerID="2e9a8c82c10f90841418d044f0389365f145a6e8417ab992c268e566e5147e56"
Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.077960 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e9a8c82c10f90841418d044f0389365f145a6e8417ab992c268e566e5147e56"} err="failed to get container status \"2e9a8c82c10f90841418d044f0389365f145a6e8417ab992c268e566e5147e56\": rpc error: code = NotFound desc = could not find container \"2e9a8c82c10f90841418d044f0389365f145a6e8417ab992c268e566e5147e56\": container with ID starting with 2e9a8c82c10f90841418d044f0389365f145a6e8417ab992c268e566e5147e56 not found: ID does not exist"
Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.087272 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a409f14f-4cf5-467e-afec-1fd121548e05-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.087296 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a409f14f-4cf5-467e-afec-1fd121548e05-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.087306 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngjvj\" (UniqueName: \"kubernetes.io/projected/a409f14f-4cf5-467e-afec-1fd121548e05-kube-api-access-ngjvj\") on node \"crc\" DevicePath \"\""
Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.341967 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8fl5h"]
Nov 25 09:04:28 crc kubenswrapper[4482]: I1125 09:04:28.347982 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8fl5h"]
Nov 25 09:04:29 crc kubenswrapper[4482]: I1125 09:04:29.841113 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a409f14f-4cf5-467e-afec-1fd121548e05" path="/var/lib/kubelet/pods/a409f14f-4cf5-467e-afec-1fd121548e05/volumes"
Nov 25 09:05:39 crc kubenswrapper[4482]: I1125 09:05:39.118114 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 09:05:39 crc kubenswrapper[4482]: I1125 09:05:39.118537 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 09:06:09 crc kubenswrapper[4482]: I1125 09:06:09.117993 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 09:06:09 crc kubenswrapper[4482]: I1125 09:06:09.118496 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 09:06:39 crc kubenswrapper[4482]: I1125 09:06:39.117354 4482 patch_prober.go:28] interesting pod/machine-config-daemon-p4qzz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 09:06:39 crc kubenswrapper[4482]: I1125 09:06:39.117968 4482 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 09:06:39 crc kubenswrapper[4482]: I1125 09:06:39.118023 4482 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz"
Nov 25 09:06:39 crc kubenswrapper[4482]: I1125 09:06:39.118877 4482 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5"} pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 09:06:39 crc kubenswrapper[4482]: I1125 09:06:39.118926 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerName="machine-config-daemon" containerID="cri-o://32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" gracePeriod=600
Nov 25 09:06:39 crc kubenswrapper[4482]: E1125 09:06:39.238493 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 09:06:39 crc kubenswrapper[4482]: I1125 09:06:39.964398 4482 generic.go:334] "Generic (PLEG): container finished" podID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" exitCode=0
Nov 25 09:06:39 crc kubenswrapper[4482]: I1125 09:06:39.964864 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerDied","Data":"32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5"}
Nov 25 09:06:39 crc kubenswrapper[4482]: I1125 09:06:39.964925 4482 scope.go:117] "RemoveContainer" containerID="d815cfb71cd821f80a05f5726025f6ed7e62391c4548400f2df38b4128253c78"
Nov 25 09:06:39 crc kubenswrapper[4482]: I1125 09:06:39.966093 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5"
Nov 25 09:06:39 crc kubenswrapper[4482]: E1125 09:06:39.966418 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 09:06:51 crc kubenswrapper[4482]: I1125 09:06:51.830296 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5"
Nov 25 09:06:51 crc kubenswrapper[4482]: E1125 09:06:51.830833 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 09:07:05 crc kubenswrapper[4482]: I1125 09:07:05.835566 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5"
Nov 25 09:07:05 crc kubenswrapper[4482]: E1125 09:07:05.836070 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 09:07:17 crc kubenswrapper[4482]: I1125 09:07:17.831310 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5"
Nov 25 09:07:17 crc kubenswrapper[4482]: E1125 09:07:17.831848 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.626902 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8njc4"]
Nov 25 09:07:19 crc kubenswrapper[4482]: E1125 09:07:19.627817 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a409f14f-4cf5-467e-afec-1fd121548e05" containerName="registry-server"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.627838 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="a409f14f-4cf5-467e-afec-1fd121548e05" containerName="registry-server"
Nov 25 09:07:19 crc kubenswrapper[4482]: E1125 09:07:19.627859 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a409f14f-4cf5-467e-afec-1fd121548e05" containerName="extract-content"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.627866 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="a409f14f-4cf5-467e-afec-1fd121548e05" containerName="extract-content"
Nov 25 09:07:19 crc kubenswrapper[4482]: E1125 09:07:19.627876 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a409f14f-4cf5-467e-afec-1fd121548e05" containerName="extract-utilities"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.627882 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="a409f14f-4cf5-467e-afec-1fd121548e05" containerName="extract-utilities"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.628486 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="a409f14f-4cf5-467e-afec-1fd121548e05" containerName="registry-server"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.629691 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.635708 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8njc4"]
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.694921 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e915ea-b68b-403e-9617-0f918359a839-utilities\") pod \"community-operators-8njc4\" (UID: \"18e915ea-b68b-403e-9617-0f918359a839\") " pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.695227 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7x69\" (UniqueName: \"kubernetes.io/projected/18e915ea-b68b-403e-9617-0f918359a839-kube-api-access-s7x69\") pod \"community-operators-8njc4\" (UID: \"18e915ea-b68b-403e-9617-0f918359a839\") " pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.695365 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e915ea-b68b-403e-9617-0f918359a839-catalog-content\") pod \"community-operators-8njc4\" (UID: \"18e915ea-b68b-403e-9617-0f918359a839\") " pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.796883 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7x69\" (UniqueName: \"kubernetes.io/projected/18e915ea-b68b-403e-9617-0f918359a839-kube-api-access-s7x69\") pod \"community-operators-8njc4\" (UID: \"18e915ea-b68b-403e-9617-0f918359a839\") " pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.797011 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e915ea-b68b-403e-9617-0f918359a839-catalog-content\") pod \"community-operators-8njc4\" (UID: \"18e915ea-b68b-403e-9617-0f918359a839\") " pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.797059 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e915ea-b68b-403e-9617-0f918359a839-utilities\") pod \"community-operators-8njc4\" (UID: \"18e915ea-b68b-403e-9617-0f918359a839\") " pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.797507 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e915ea-b68b-403e-9617-0f918359a839-catalog-content\") pod \"community-operators-8njc4\" (UID: \"18e915ea-b68b-403e-9617-0f918359a839\") " pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.797555 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e915ea-b68b-403e-9617-0f918359a839-utilities\") pod \"community-operators-8njc4\" (UID: \"18e915ea-b68b-403e-9617-0f918359a839\") " pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.815816 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7x69\" (UniqueName: \"kubernetes.io/projected/18e915ea-b68b-403e-9617-0f918359a839-kube-api-access-s7x69\") pod \"community-operators-8njc4\" (UID: \"18e915ea-b68b-403e-9617-0f918359a839\") " pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:19 crc kubenswrapper[4482]: I1125 09:07:19.945609 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:20 crc kubenswrapper[4482]: I1125 09:07:20.602872 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8njc4"]
Nov 25 09:07:20 crc kubenswrapper[4482]: W1125 09:07:20.619596 4482 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18e915ea_b68b_403e_9617_0f918359a839.slice/crio-789667b1dd1121382d7677692af460ef58f92e489f490b412743d269276b1aa9 WatchSource:0}: Error finding container 789667b1dd1121382d7677692af460ef58f92e489f490b412743d269276b1aa9: Status 404 returned error can't find the container with id 789667b1dd1121382d7677692af460ef58f92e489f490b412743d269276b1aa9
Nov 25 09:07:21 crc kubenswrapper[4482]: I1125 09:07:21.248340 4482 generic.go:334] "Generic (PLEG): container finished" podID="18e915ea-b68b-403e-9617-0f918359a839" containerID="6b9a7af692d2d6e5317b0d0c599fc0417029d4b0f236b40b15930db4df9d770f" exitCode=0
Nov 25 09:07:21 crc kubenswrapper[4482]: I1125 09:07:21.248439 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8njc4" event={"ID":"18e915ea-b68b-403e-9617-0f918359a839","Type":"ContainerDied","Data":"6b9a7af692d2d6e5317b0d0c599fc0417029d4b0f236b40b15930db4df9d770f"}
Nov 25 09:07:21 crc kubenswrapper[4482]: I1125 09:07:21.248541 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8njc4" event={"ID":"18e915ea-b68b-403e-9617-0f918359a839","Type":"ContainerStarted","Data":"789667b1dd1121382d7677692af460ef58f92e489f490b412743d269276b1aa9"}
Nov 25 09:07:22 crc kubenswrapper[4482]: I1125 09:07:22.257045 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8njc4" event={"ID":"18e915ea-b68b-403e-9617-0f918359a839","Type":"ContainerStarted","Data":"594aa2be184076dbddcf7ada0e40da2c8a3647d58ad26472d07e51150a6f9f6c"}
Nov 25 09:07:23 crc kubenswrapper[4482]: I1125 09:07:23.264641 4482 generic.go:334] "Generic (PLEG): container finished" podID="18e915ea-b68b-403e-9617-0f918359a839" containerID="594aa2be184076dbddcf7ada0e40da2c8a3647d58ad26472d07e51150a6f9f6c" exitCode=0
Nov 25 09:07:23 crc kubenswrapper[4482]: I1125 09:07:23.264675 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8njc4" event={"ID":"18e915ea-b68b-403e-9617-0f918359a839","Type":"ContainerDied","Data":"594aa2be184076dbddcf7ada0e40da2c8a3647d58ad26472d07e51150a6f9f6c"}
Nov 25 09:07:24 crc kubenswrapper[4482]: I1125 09:07:24.273333 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8njc4" event={"ID":"18e915ea-b68b-403e-9617-0f918359a839","Type":"ContainerStarted","Data":"de2b5a61f32d04f87c2f73ffe34ba6fbdb11682e54d18faec43f121729f5e509"}
Nov 25 09:07:24 crc kubenswrapper[4482]: I1125 09:07:24.288677 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8njc4" podStartSLOduration=2.645825489 podStartE2EDuration="5.288664455s" podCreationTimestamp="2025-11-25 09:07:19 +0000 UTC" firstStartedPulling="2025-11-25 09:07:21.249778319 +0000 UTC m=+8415.738009578" lastFinishedPulling="2025-11-25 09:07:23.892617285 +0000 UTC m=+8418.380848544" observedRunningTime="2025-11-25 09:07:24.285304153 +0000 UTC m=+8418.773535411" watchObservedRunningTime="2025-11-25 09:07:24.288664455 +0000 UTC m=+8418.776895715"
Nov 25 09:07:29 crc kubenswrapper[4482]: I1125 09:07:29.831238 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5"
Nov 25 09:07:29 crc kubenswrapper[4482]: E1125 09:07:29.831750 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 09:07:29 crc kubenswrapper[4482]: I1125 09:07:29.946623 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:29 crc kubenswrapper[4482]: I1125 09:07:29.946661 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:29 crc kubenswrapper[4482]: I1125 09:07:29.985203 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:30 crc kubenswrapper[4482]: I1125 09:07:30.348137 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:30 crc kubenswrapper[4482]: I1125 09:07:30.385795 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8njc4"]
Nov 25 09:07:32 crc kubenswrapper[4482]: I1125 09:07:32.326231 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8njc4" podUID="18e915ea-b68b-403e-9617-0f918359a839" containerName="registry-server" containerID="cri-o://de2b5a61f32d04f87c2f73ffe34ba6fbdb11682e54d18faec43f121729f5e509" gracePeriod=2
Nov 25 09:07:32 crc kubenswrapper[4482]: I1125 09:07:32.795838 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:32 crc kubenswrapper[4482]: I1125 09:07:32.922989 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7x69\" (UniqueName: \"kubernetes.io/projected/18e915ea-b68b-403e-9617-0f918359a839-kube-api-access-s7x69\") pod \"18e915ea-b68b-403e-9617-0f918359a839\" (UID: \"18e915ea-b68b-403e-9617-0f918359a839\") "
Nov 25 09:07:32 crc kubenswrapper[4482]: I1125 09:07:32.923731 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e915ea-b68b-403e-9617-0f918359a839-catalog-content\") pod \"18e915ea-b68b-403e-9617-0f918359a839\" (UID: \"18e915ea-b68b-403e-9617-0f918359a839\") "
Nov 25 09:07:32 crc kubenswrapper[4482]: I1125 09:07:32.923800 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e915ea-b68b-403e-9617-0f918359a839-utilities\") pod \"18e915ea-b68b-403e-9617-0f918359a839\" (UID: \"18e915ea-b68b-403e-9617-0f918359a839\") "
Nov 25 09:07:32 crc kubenswrapper[4482]: I1125 09:07:32.925761 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18e915ea-b68b-403e-9617-0f918359a839-utilities" (OuterVolumeSpecName: "utilities") pod "18e915ea-b68b-403e-9617-0f918359a839" (UID: "18e915ea-b68b-403e-9617-0f918359a839"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:07:32 crc kubenswrapper[4482]: I1125 09:07:32.933677 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e915ea-b68b-403e-9617-0f918359a839-kube-api-access-s7x69" (OuterVolumeSpecName: "kube-api-access-s7x69") pod "18e915ea-b68b-403e-9617-0f918359a839" (UID: "18e915ea-b68b-403e-9617-0f918359a839"). InnerVolumeSpecName "kube-api-access-s7x69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 09:07:32 crc kubenswrapper[4482]: I1125 09:07:32.973023 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18e915ea-b68b-403e-9617-0f918359a839-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18e915ea-b68b-403e-9617-0f918359a839" (UID: "18e915ea-b68b-403e-9617-0f918359a839"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.028316 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e915ea-b68b-403e-9617-0f918359a839-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.028351 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e915ea-b68b-403e-9617-0f918359a839-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.028362 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7x69\" (UniqueName: \"kubernetes.io/projected/18e915ea-b68b-403e-9617-0f918359a839-kube-api-access-s7x69\") on node \"crc\" DevicePath \"\""
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.333986 4482 generic.go:334] "Generic (PLEG): container finished" podID="18e915ea-b68b-403e-9617-0f918359a839" containerID="de2b5a61f32d04f87c2f73ffe34ba6fbdb11682e54d18faec43f121729f5e509" exitCode=0
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.334037 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8njc4" event={"ID":"18e915ea-b68b-403e-9617-0f918359a839","Type":"ContainerDied","Data":"de2b5a61f32d04f87c2f73ffe34ba6fbdb11682e54d18faec43f121729f5e509"}
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.334104 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8njc4" event={"ID":"18e915ea-b68b-403e-9617-0f918359a839","Type":"ContainerDied","Data":"789667b1dd1121382d7677692af460ef58f92e489f490b412743d269276b1aa9"}
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.334129 4482 scope.go:117] "RemoveContainer" containerID="de2b5a61f32d04f87c2f73ffe34ba6fbdb11682e54d18faec43f121729f5e509"
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.334164 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8njc4"
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.360120 4482 scope.go:117] "RemoveContainer" containerID="594aa2be184076dbddcf7ada0e40da2c8a3647d58ad26472d07e51150a6f9f6c"
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.372864 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8njc4"]
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.377795 4482 scope.go:117] "RemoveContainer" containerID="6b9a7af692d2d6e5317b0d0c599fc0417029d4b0f236b40b15930db4df9d770f"
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.386339 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8njc4"]
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.416698 4482 scope.go:117] "RemoveContainer" containerID="de2b5a61f32d04f87c2f73ffe34ba6fbdb11682e54d18faec43f121729f5e509"
Nov 25 09:07:33 crc kubenswrapper[4482]: E1125 09:07:33.417087 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de2b5a61f32d04f87c2f73ffe34ba6fbdb11682e54d18faec43f121729f5e509\": container with ID starting with de2b5a61f32d04f87c2f73ffe34ba6fbdb11682e54d18faec43f121729f5e509 not found: ID does not exist" containerID="de2b5a61f32d04f87c2f73ffe34ba6fbdb11682e54d18faec43f121729f5e509"
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.417128 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de2b5a61f32d04f87c2f73ffe34ba6fbdb11682e54d18faec43f121729f5e509"} err="failed to get container status \"de2b5a61f32d04f87c2f73ffe34ba6fbdb11682e54d18faec43f121729f5e509\": rpc error: code = NotFound desc = could not find container \"de2b5a61f32d04f87c2f73ffe34ba6fbdb11682e54d18faec43f121729f5e509\": container with ID starting with de2b5a61f32d04f87c2f73ffe34ba6fbdb11682e54d18faec43f121729f5e509 not found: ID does not exist"
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.417157 4482 scope.go:117] "RemoveContainer" containerID="594aa2be184076dbddcf7ada0e40da2c8a3647d58ad26472d07e51150a6f9f6c"
Nov 25 09:07:33 crc kubenswrapper[4482]: E1125 09:07:33.417682 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"594aa2be184076dbddcf7ada0e40da2c8a3647d58ad26472d07e51150a6f9f6c\": container with ID starting with 594aa2be184076dbddcf7ada0e40da2c8a3647d58ad26472d07e51150a6f9f6c not found: ID does not exist" containerID="594aa2be184076dbddcf7ada0e40da2c8a3647d58ad26472d07e51150a6f9f6c"
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.417713 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"594aa2be184076dbddcf7ada0e40da2c8a3647d58ad26472d07e51150a6f9f6c"} err="failed to get container status \"594aa2be184076dbddcf7ada0e40da2c8a3647d58ad26472d07e51150a6f9f6c\": rpc error: code = NotFound desc = could not find container \"594aa2be184076dbddcf7ada0e40da2c8a3647d58ad26472d07e51150a6f9f6c\": container with ID starting with 594aa2be184076dbddcf7ada0e40da2c8a3647d58ad26472d07e51150a6f9f6c not found: ID does not exist"
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.417786 4482 scope.go:117] "RemoveContainer" containerID="6b9a7af692d2d6e5317b0d0c599fc0417029d4b0f236b40b15930db4df9d770f"
Nov 25 09:07:33 crc kubenswrapper[4482]: E1125 09:07:33.418103 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b9a7af692d2d6e5317b0d0c599fc0417029d4b0f236b40b15930db4df9d770f\": container with ID starting with 6b9a7af692d2d6e5317b0d0c599fc0417029d4b0f236b40b15930db4df9d770f not found: ID does not exist" containerID="6b9a7af692d2d6e5317b0d0c599fc0417029d4b0f236b40b15930db4df9d770f"
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.418249 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b9a7af692d2d6e5317b0d0c599fc0417029d4b0f236b40b15930db4df9d770f"} err="failed to get container status \"6b9a7af692d2d6e5317b0d0c599fc0417029d4b0f236b40b15930db4df9d770f\": rpc error: code = NotFound desc = could not find container \"6b9a7af692d2d6e5317b0d0c599fc0417029d4b0f236b40b15930db4df9d770f\": container with ID starting with 6b9a7af692d2d6e5317b0d0c599fc0417029d4b0f236b40b15930db4df9d770f not found: ID does not exist"
Nov 25 09:07:33 crc kubenswrapper[4482]: I1125 09:07:33.841427 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18e915ea-b68b-403e-9617-0f918359a839" path="/var/lib/kubelet/pods/18e915ea-b68b-403e-9617-0f918359a839/volumes"
Nov 25 09:07:43 crc kubenswrapper[4482]: I1125 09:07:43.831578 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5"
Nov 25 09:07:43 crc kubenswrapper[4482]: E1125 09:07:43.832278 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 09:07:54 crc kubenswrapper[4482]: I1125 09:07:54.830935 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5"
Nov 25 09:07:54 crc kubenswrapper[4482]: E1125 09:07:54.832361 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 09:08:09 crc kubenswrapper[4482]: I1125 09:08:09.833933 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5"
Nov 25 09:08:09 crc kubenswrapper[4482]: E1125 09:08:09.835531 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5"
Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.668412 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fh5bt"]
Nov 25 09:08:19 crc kubenswrapper[4482]: E1125 09:08:19.669208 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e915ea-b68b-403e-9617-0f918359a839" containerName="extract-content"
containerName="extract-content" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.669221 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e915ea-b68b-403e-9617-0f918359a839" containerName="extract-content" Nov 25 09:08:19 crc kubenswrapper[4482]: E1125 09:08:19.669258 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e915ea-b68b-403e-9617-0f918359a839" containerName="extract-utilities" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.669265 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e915ea-b68b-403e-9617-0f918359a839" containerName="extract-utilities" Nov 25 09:08:19 crc kubenswrapper[4482]: E1125 09:08:19.669288 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e915ea-b68b-403e-9617-0f918359a839" containerName="registry-server" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.669294 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e915ea-b68b-403e-9617-0f918359a839" containerName="registry-server" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.669455 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e915ea-b68b-403e-9617-0f918359a839" containerName="registry-server" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.670858 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.678163 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fh5bt"] Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.742648 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2addd4-e5c8-42bc-a660-03a0dca18b37-utilities\") pod \"redhat-marketplace-fh5bt\" (UID: \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\") " pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.742860 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2addd4-e5c8-42bc-a660-03a0dca18b37-catalog-content\") pod \"redhat-marketplace-fh5bt\" (UID: \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\") " pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.742920 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8x2t\" (UniqueName: \"kubernetes.io/projected/9b2addd4-e5c8-42bc-a660-03a0dca18b37-kube-api-access-z8x2t\") pod \"redhat-marketplace-fh5bt\" (UID: \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\") " pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.844182 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2addd4-e5c8-42bc-a660-03a0dca18b37-catalog-content\") pod \"redhat-marketplace-fh5bt\" (UID: \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\") " pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.844233 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8x2t\" (UniqueName: \"kubernetes.io/projected/9b2addd4-e5c8-42bc-a660-03a0dca18b37-kube-api-access-z8x2t\") pod \"redhat-marketplace-fh5bt\" (UID: 
\"9b2addd4-e5c8-42bc-a660-03a0dca18b37\") " pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.844318 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2addd4-e5c8-42bc-a660-03a0dca18b37-utilities\") pod \"redhat-marketplace-fh5bt\" (UID: \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\") " pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.844730 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2addd4-e5c8-42bc-a660-03a0dca18b37-utilities\") pod \"redhat-marketplace-fh5bt\" (UID: \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\") " pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.844916 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2addd4-e5c8-42bc-a660-03a0dca18b37-catalog-content\") pod \"redhat-marketplace-fh5bt\" (UID: \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\") " pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.860989 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8x2t\" (UniqueName: \"kubernetes.io/projected/9b2addd4-e5c8-42bc-a660-03a0dca18b37-kube-api-access-z8x2t\") pod \"redhat-marketplace-fh5bt\" (UID: \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\") " pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:19 crc kubenswrapper[4482]: I1125 09:08:19.986954 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:20 crc kubenswrapper[4482]: I1125 09:08:20.410824 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fh5bt"] Nov 25 09:08:20 crc kubenswrapper[4482]: I1125 09:08:20.712833 4482 generic.go:334] "Generic (PLEG): container finished" podID="9b2addd4-e5c8-42bc-a660-03a0dca18b37" containerID="3c7f7de4139372ff539efc695ec3eba6bc7996337341b0eef1c2df145b2def05" exitCode=0 Nov 25 09:08:20 crc kubenswrapper[4482]: I1125 09:08:20.712921 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fh5bt" event={"ID":"9b2addd4-e5c8-42bc-a660-03a0dca18b37","Type":"ContainerDied","Data":"3c7f7de4139372ff539efc695ec3eba6bc7996337341b0eef1c2df145b2def05"} Nov 25 09:08:20 crc kubenswrapper[4482]: I1125 09:08:20.713050 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fh5bt" event={"ID":"9b2addd4-e5c8-42bc-a660-03a0dca18b37","Type":"ContainerStarted","Data":"f90d683e97c6c92ca5659885a4cd970c62fe63d695f78746ca0c64ff3f590389"} Nov 25 09:08:21 crc kubenswrapper[4482]: I1125 09:08:21.722005 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fh5bt" event={"ID":"9b2addd4-e5c8-42bc-a660-03a0dca18b37","Type":"ContainerStarted","Data":"949f96b2b6907f3e4c420c3504784975cc2d71fa8b1f26859419f91f87a91691"} Nov 25 09:08:22 crc kubenswrapper[4482]: I1125 09:08:22.732128 4482 generic.go:334] "Generic (PLEG): container finished" podID="9b2addd4-e5c8-42bc-a660-03a0dca18b37" containerID="949f96b2b6907f3e4c420c3504784975cc2d71fa8b1f26859419f91f87a91691" exitCode=0 Nov 25 09:08:22 crc kubenswrapper[4482]: I1125 09:08:22.732203 4482 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fh5bt" event={"ID":"9b2addd4-e5c8-42bc-a660-03a0dca18b37","Type":"ContainerDied","Data":"949f96b2b6907f3e4c420c3504784975cc2d71fa8b1f26859419f91f87a91691"} Nov 25 09:08:23 crc kubenswrapper[4482]: I1125 09:08:23.741825 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fh5bt" event={"ID":"9b2addd4-e5c8-42bc-a660-03a0dca18b37","Type":"ContainerStarted","Data":"214b19e4f08b134d4a48f5c3ea5962ffa56ac8b63ad78417a1e317ecfd4bcbd1"} Nov 25 09:08:23 crc kubenswrapper[4482]: I1125 09:08:23.766723 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fh5bt" podStartSLOduration=2.169688021 podStartE2EDuration="4.766709976s" podCreationTimestamp="2025-11-25 09:08:19 +0000 UTC" firstStartedPulling="2025-11-25 09:08:20.714895834 +0000 UTC m=+8475.203127093" lastFinishedPulling="2025-11-25 09:08:23.311917788 +0000 UTC m=+8477.800149048" observedRunningTime="2025-11-25 09:08:23.7608707 +0000 UTC m=+8478.249101959" watchObservedRunningTime="2025-11-25 09:08:23.766709976 +0000 UTC m=+8478.254941234" Nov 25 09:08:24 crc kubenswrapper[4482]: I1125 09:08:24.831474 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:08:24 crc kubenswrapper[4482]: E1125 09:08:24.831826 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:08:29 crc kubenswrapper[4482]: I1125 09:08:29.987938 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:29 crc kubenswrapper[4482]: I1125 09:08:29.988301 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:30 crc kubenswrapper[4482]: I1125 09:08:30.023232 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:30 crc kubenswrapper[4482]: I1125 09:08:30.814355 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:30 crc kubenswrapper[4482]: I1125 09:08:30.849057 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fh5bt"] Nov 25 09:08:32 crc kubenswrapper[4482]: I1125 09:08:32.798550 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fh5bt" podUID="9b2addd4-e5c8-42bc-a660-03a0dca18b37" containerName="registry-server" containerID="cri-o://214b19e4f08b134d4a48f5c3ea5962ffa56ac8b63ad78417a1e317ecfd4bcbd1" gracePeriod=2 Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.208950 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.365840 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8x2t\" (UniqueName: \"kubernetes.io/projected/9b2addd4-e5c8-42bc-a660-03a0dca18b37-kube-api-access-z8x2t\") pod \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\" (UID: \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\") " Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.365918 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2addd4-e5c8-42bc-a660-03a0dca18b37-catalog-content\") pod \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\" (UID: \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\") " Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.365983 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2addd4-e5c8-42bc-a660-03a0dca18b37-utilities\") pod \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\" (UID: \"9b2addd4-e5c8-42bc-a660-03a0dca18b37\") " Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.366781 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b2addd4-e5c8-42bc-a660-03a0dca18b37-utilities" (OuterVolumeSpecName: "utilities") pod "9b2addd4-e5c8-42bc-a660-03a0dca18b37" (UID: "9b2addd4-e5c8-42bc-a660-03a0dca18b37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.372362 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2addd4-e5c8-42bc-a660-03a0dca18b37-kube-api-access-z8x2t" (OuterVolumeSpecName: "kube-api-access-z8x2t") pod "9b2addd4-e5c8-42bc-a660-03a0dca18b37" (UID: "9b2addd4-e5c8-42bc-a660-03a0dca18b37"). InnerVolumeSpecName "kube-api-access-z8x2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.379043 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b2addd4-e5c8-42bc-a660-03a0dca18b37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b2addd4-e5c8-42bc-a660-03a0dca18b37" (UID: "9b2addd4-e5c8-42bc-a660-03a0dca18b37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.470545 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8x2t\" (UniqueName: \"kubernetes.io/projected/9b2addd4-e5c8-42bc-a660-03a0dca18b37-kube-api-access-z8x2t\") on node \"crc\" DevicePath \"\"" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.470580 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2addd4-e5c8-42bc-a660-03a0dca18b37-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.470590 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2addd4-e5c8-42bc-a660-03a0dca18b37-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.808100 4482 generic.go:334] "Generic (PLEG): container finished" podID="9b2addd4-e5c8-42bc-a660-03a0dca18b37" containerID="214b19e4f08b134d4a48f5c3ea5962ffa56ac8b63ad78417a1e317ecfd4bcbd1" exitCode=0 Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.808229 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fh5bt" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.808227 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fh5bt" event={"ID":"9b2addd4-e5c8-42bc-a660-03a0dca18b37","Type":"ContainerDied","Data":"214b19e4f08b134d4a48f5c3ea5962ffa56ac8b63ad78417a1e317ecfd4bcbd1"} Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.808944 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fh5bt" event={"ID":"9b2addd4-e5c8-42bc-a660-03a0dca18b37","Type":"ContainerDied","Data":"f90d683e97c6c92ca5659885a4cd970c62fe63d695f78746ca0c64ff3f590389"} Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.808966 4482 scope.go:117] "RemoveContainer" containerID="214b19e4f08b134d4a48f5c3ea5962ffa56ac8b63ad78417a1e317ecfd4bcbd1" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.827747 4482 scope.go:117] "RemoveContainer" containerID="949f96b2b6907f3e4c420c3504784975cc2d71fa8b1f26859419f91f87a91691" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.859932 4482 scope.go:117] "RemoveContainer" containerID="3c7f7de4139372ff539efc695ec3eba6bc7996337341b0eef1c2df145b2def05" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.862184 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fh5bt"] Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.874854 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fh5bt"] Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.893413 4482 scope.go:117] "RemoveContainer" containerID="214b19e4f08b134d4a48f5c3ea5962ffa56ac8b63ad78417a1e317ecfd4bcbd1" Nov 25 09:08:33 crc kubenswrapper[4482]: E1125 09:08:33.893829 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"214b19e4f08b134d4a48f5c3ea5962ffa56ac8b63ad78417a1e317ecfd4bcbd1\": container with ID starting with 214b19e4f08b134d4a48f5c3ea5962ffa56ac8b63ad78417a1e317ecfd4bcbd1 not found: ID does not exist" containerID="214b19e4f08b134d4a48f5c3ea5962ffa56ac8b63ad78417a1e317ecfd4bcbd1" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.893860 4482 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"214b19e4f08b134d4a48f5c3ea5962ffa56ac8b63ad78417a1e317ecfd4bcbd1"} err="failed to get container status \"214b19e4f08b134d4a48f5c3ea5962ffa56ac8b63ad78417a1e317ecfd4bcbd1\": rpc error: code = NotFound desc = could not find container \"214b19e4f08b134d4a48f5c3ea5962ffa56ac8b63ad78417a1e317ecfd4bcbd1\": container with ID starting with 214b19e4f08b134d4a48f5c3ea5962ffa56ac8b63ad78417a1e317ecfd4bcbd1 not found: ID does not exist" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.893880 4482 scope.go:117] "RemoveContainer" containerID="949f96b2b6907f3e4c420c3504784975cc2d71fa8b1f26859419f91f87a91691" Nov 25 09:08:33 crc kubenswrapper[4482]: E1125 09:08:33.894134 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"949f96b2b6907f3e4c420c3504784975cc2d71fa8b1f26859419f91f87a91691\": container with ID starting with 949f96b2b6907f3e4c420c3504784975cc2d71fa8b1f26859419f91f87a91691 not found: ID does not exist" containerID="949f96b2b6907f3e4c420c3504784975cc2d71fa8b1f26859419f91f87a91691" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.894161 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"949f96b2b6907f3e4c420c3504784975cc2d71fa8b1f26859419f91f87a91691"} err="failed to get container status \"949f96b2b6907f3e4c420c3504784975cc2d71fa8b1f26859419f91f87a91691\": rpc error: code = NotFound desc = could not find container \"949f96b2b6907f3e4c420c3504784975cc2d71fa8b1f26859419f91f87a91691\": container with ID starting with 949f96b2b6907f3e4c420c3504784975cc2d71fa8b1f26859419f91f87a91691 not found: ID does not exist" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.894223 4482 scope.go:117] "RemoveContainer" containerID="3c7f7de4139372ff539efc695ec3eba6bc7996337341b0eef1c2df145b2def05" Nov 25 09:08:33 crc kubenswrapper[4482]: E1125 09:08:33.894499 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c7f7de4139372ff539efc695ec3eba6bc7996337341b0eef1c2df145b2def05\": container with ID starting with 3c7f7de4139372ff539efc695ec3eba6bc7996337341b0eef1c2df145b2def05 not found: ID does not exist" containerID="3c7f7de4139372ff539efc695ec3eba6bc7996337341b0eef1c2df145b2def05" Nov 25 09:08:33 crc kubenswrapper[4482]: I1125 09:08:33.894527 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c7f7de4139372ff539efc695ec3eba6bc7996337341b0eef1c2df145b2def05"} err="failed to get container status \"3c7f7de4139372ff539efc695ec3eba6bc7996337341b0eef1c2df145b2def05\": rpc error: code = NotFound desc = could not find container \"3c7f7de4139372ff539efc695ec3eba6bc7996337341b0eef1c2df145b2def05\": container with ID starting with 3c7f7de4139372ff539efc695ec3eba6bc7996337341b0eef1c2df145b2def05 not found: ID does not exist" Nov 25 09:08:35 crc kubenswrapper[4482]: I1125 09:08:35.837870 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b2addd4-e5c8-42bc-a660-03a0dca18b37" path="/var/lib/kubelet/pods/9b2addd4-e5c8-42bc-a660-03a0dca18b37/volumes" Nov 25 09:08:36 crc kubenswrapper[4482]: I1125 09:08:36.830848 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:08:36 crc kubenswrapper[4482]: E1125 09:08:36.831294 4482 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:08:48 crc kubenswrapper[4482]: I1125 09:08:48.832022 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:08:48 crc kubenswrapper[4482]: E1125 09:08:48.832823 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:09:00 crc kubenswrapper[4482]: I1125 09:09:00.830681 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:09:00 crc kubenswrapper[4482]: E1125 09:09:00.831564 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:09:11 crc kubenswrapper[4482]: I1125 09:09:11.830731 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:09:11 crc kubenswrapper[4482]: E1125 09:09:11.831391 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:09:24 crc kubenswrapper[4482]: I1125 09:09:24.831257 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:09:24 crc kubenswrapper[4482]: E1125 09:09:24.831809 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:09:36 crc kubenswrapper[4482]: I1125 09:09:36.831687 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:09:36 crc kubenswrapper[4482]: E1125 09:09:36.832843 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:09:50 crc kubenswrapper[4482]: I1125 09:09:50.831520 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:09:50 crc kubenswrapper[4482]: E1125 09:09:50.832442 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:10:03 crc kubenswrapper[4482]: I1125 09:10:03.831808 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:10:03 crc kubenswrapper[4482]: E1125 09:10:03.832388 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:10:06 crc kubenswrapper[4482]: I1125 09:10:06.473137 4482 generic.go:334] "Generic (PLEG): container finished" podID="2d7f601b-273a-4af7-8c8f-a6c60ebf212b" containerID="82592a3065f5995230540d32b2108ef2ebc524bb729162a8af220009996b856a" exitCode=0 Nov 25 09:10:06 crc kubenswrapper[4482]: I1125 09:10:06.473224 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"2d7f601b-273a-4af7-8c8f-a6c60ebf212b","Type":"ContainerDied","Data":"82592a3065f5995230540d32b2108ef2ebc524bb729162a8af220009996b856a"} Nov 25 09:10:07 crc kubenswrapper[4482]: I1125 09:10:07.958822 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.124987 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-ssh-key\") pod \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.125056 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-config-data\") pod \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.125137 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-ca-certs\") pod \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.125320 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-openstack-config\") pod \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.125574 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k67l4\" (UniqueName: \"kubernetes.io/projected/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-kube-api-access-k67l4\") pod \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.125656 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-test-operator-ephemeral-temporary\") pod \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.125712 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.125787 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-openstack-config-secret\") pod \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.125823 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-config-data" (OuterVolumeSpecName: "config-data") pod "2d7f601b-273a-4af7-8c8f-a6c60ebf212b" (UID: "2d7f601b-273a-4af7-8c8f-a6c60ebf212b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.125850 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-test-operator-ephemeral-workdir\") pod \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\" (UID: \"2d7f601b-273a-4af7-8c8f-a6c60ebf212b\") " Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.126603 4482 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.127655 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "2d7f601b-273a-4af7-8c8f-a6c60ebf212b" (UID: "2d7f601b-273a-4af7-8c8f-a6c60ebf212b"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.130969 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2d7f601b-273a-4af7-8c8f-a6c60ebf212b" (UID: "2d7f601b-273a-4af7-8c8f-a6c60ebf212b"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.131471 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-kube-api-access-k67l4" (OuterVolumeSpecName: "kube-api-access-k67l4") pod "2d7f601b-273a-4af7-8c8f-a6c60ebf212b" (UID: "2d7f601b-273a-4af7-8c8f-a6c60ebf212b"). InnerVolumeSpecName "kube-api-access-k67l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.131694 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2d7f601b-273a-4af7-8c8f-a6c60ebf212b" (UID: "2d7f601b-273a-4af7-8c8f-a6c60ebf212b"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.152922 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2d7f601b-273a-4af7-8c8f-a6c60ebf212b" (UID: "2d7f601b-273a-4af7-8c8f-a6c60ebf212b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.154625 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2d7f601b-273a-4af7-8c8f-a6c60ebf212b" (UID: "2d7f601b-273a-4af7-8c8f-a6c60ebf212b"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.157508 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2d7f601b-273a-4af7-8c8f-a6c60ebf212b" (UID: "2d7f601b-273a-4af7-8c8f-a6c60ebf212b"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.175567 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2d7f601b-273a-4af7-8c8f-a6c60ebf212b" (UID: "2d7f601b-273a-4af7-8c8f-a6c60ebf212b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.229783 4482 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.229818 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k67l4\" (UniqueName: \"kubernetes.io/projected/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-kube-api-access-k67l4\") on node \"crc\" DevicePath \"\"" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.229834 4482 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.231013 4482 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.231049 4482 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.231064 4482 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.231079 4482 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.231089 4482 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d7f601b-273a-4af7-8c8f-a6c60ebf212b-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.270493 4482 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.332688 4482 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.495871 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"2d7f601b-273a-4af7-8c8f-a6c60ebf212b","Type":"ContainerDied","Data":"aff0e6bcad370e6346ed512679533c1190ed0e4616768b3963c70b5568abf1cb"} Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.495938 4482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aff0e6bcad370e6346ed512679533c1190ed0e4616768b3963c70b5568abf1cb" Nov 25 09:10:08 crc kubenswrapper[4482]: I1125 09:10:08.496194 4482 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.671434 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 09:10:14 crc kubenswrapper[4482]: E1125 09:10:14.672549 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7f601b-273a-4af7-8c8f-a6c60ebf212b" containerName="tempest-tests-tempest-tests-runner" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.672563 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7f601b-273a-4af7-8c8f-a6c60ebf212b" containerName="tempest-tests-tempest-tests-runner" Nov 25 09:10:14 crc kubenswrapper[4482]: E1125 09:10:14.672596 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2addd4-e5c8-42bc-a660-03a0dca18b37" containerName="extract-content" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.672603 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2addd4-e5c8-42bc-a660-03a0dca18b37" containerName="extract-content" Nov 25 09:10:14 crc kubenswrapper[4482]: E1125 09:10:14.672622 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2addd4-e5c8-42bc-a660-03a0dca18b37" containerName="extract-utilities" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.672630 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2addd4-e5c8-42bc-a660-03a0dca18b37" containerName="extract-utilities" Nov 25 09:10:14 crc kubenswrapper[4482]: E1125 09:10:14.672658 4482 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2addd4-e5c8-42bc-a660-03a0dca18b37" containerName="registry-server" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.672664 4482 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2addd4-e5c8-42bc-a660-03a0dca18b37" containerName="registry-server" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.672861 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b2addd4-e5c8-42bc-a660-03a0dca18b37" containerName="registry-server" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.672872 4482 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7f601b-273a-4af7-8c8f-a6c60ebf212b" containerName="tempest-tests-tempest-tests-runner" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.673685 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.677835 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.678975 4482 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rldsl" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.870705 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b1a2fac4-94ad-4f78-aabe-d9a09d8b2c8a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.870861 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhfl\" (UniqueName: \"kubernetes.io/projected/b1a2fac4-94ad-4f78-aabe-d9a09d8b2c8a-kube-api-access-2lhfl\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b1a2fac4-94ad-4f78-aabe-d9a09d8b2c8a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.972901 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lhfl\" (UniqueName: \"kubernetes.io/projected/b1a2fac4-94ad-4f78-aabe-d9a09d8b2c8a-kube-api-access-2lhfl\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b1a2fac4-94ad-4f78-aabe-d9a09d8b2c8a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.973035 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b1a2fac4-94ad-4f78-aabe-d9a09d8b2c8a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.974906 4482 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b1a2fac4-94ad-4f78-aabe-d9a09d8b2c8a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.992133 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lhfl\" (UniqueName: \"kubernetes.io/projected/b1a2fac4-94ad-4f78-aabe-d9a09d8b2c8a-kube-api-access-2lhfl\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b1a2fac4-94ad-4f78-aabe-d9a09d8b2c8a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:10:14 crc kubenswrapper[4482]: I1125 09:10:14.995554 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b1a2fac4-94ad-4f78-aabe-d9a09d8b2c8a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:10:15 crc 
kubenswrapper[4482]: I1125 09:10:15.292780 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:10:15 crc kubenswrapper[4482]: I1125 09:10:15.697095 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 09:10:15 crc kubenswrapper[4482]: I1125 09:10:15.700755 4482 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:10:15 crc kubenswrapper[4482]: I1125 09:10:15.842064 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:10:15 crc kubenswrapper[4482]: E1125 09:10:15.842506 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:10:16 crc kubenswrapper[4482]: I1125 09:10:16.573618 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b1a2fac4-94ad-4f78-aabe-d9a09d8b2c8a","Type":"ContainerStarted","Data":"aaca4ae4fec83321d497c983d813ae41663ae98a58a48b72af0547ebd4ced1eb"} Nov 25 09:10:17 crc kubenswrapper[4482]: I1125 09:10:17.583531 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b1a2fac4-94ad-4f78-aabe-d9a09d8b2c8a","Type":"ContainerStarted","Data":"29d7766b562368cec0a645dd9dddbbe2c3441b39d5f2bd420cc79af2f74a7acd"} Nov 25 09:10:17 crc kubenswrapper[4482]: I1125 09:10:17.603625 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.457849769 podStartE2EDuration="3.603607481s" podCreationTimestamp="2025-11-25 09:10:14 +0000 UTC" firstStartedPulling="2025-11-25 09:10:15.700012672 +0000 UTC m=+8590.188243930" lastFinishedPulling="2025-11-25 09:10:16.845770383 +0000 UTC m=+8591.334001642" observedRunningTime="2025-11-25 09:10:17.597508185 +0000 UTC m=+8592.085739444" watchObservedRunningTime="2025-11-25 09:10:17.603607481 +0000 UTC m=+8592.091838761" Nov 25 09:10:27 crc kubenswrapper[4482]: I1125 09:10:27.831531 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:10:27 crc kubenswrapper[4482]: E1125 09:10:27.832379 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:10:39 crc kubenswrapper[4482]: I1125 09:10:39.831552 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:10:39 crc kubenswrapper[4482]: E1125 09:10:39.832528 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
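
In the pod_startup_latency_tracker entry above, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (09:10:17.603607481 − 09:10:14 = 3.603607481s), and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling − firstStartedPulling ≈ 1.145757712s on the monotonic m=+ readings), leaving 2.457849769s. A minimal Go check reproducing that arithmetic from the logged timestamps; parsing the wall-clock strings lands within 1ns of the logged SLO value because the tracker subtracts the monotonic readings.

// slo_duration.go - reproduce the startup-latency arithmetic for the
// test-operator-logs pod from the timestamps in the entry above:
// podStartSLOduration = (watchObservedRunningTime - podCreationTimestamp)
//                       - (lastFinishedPulling - firstStartedPulling)
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-25 09:10:14 +0000 UTC")
	firstPull := mustParse("2025-11-25 09:10:15.700012672 +0000 UTC")
	lastPull := mustParse("2025-11-25 09:10:16.845770383 +0000 UTC")
	running := mustParse("2025-11-25 09:10:17.603607481 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e) // 3.603607481s
	fmt.Println("podStartSLOduration:", slo) // 2.45784977s (1ns off the logged
	// 2.457849769 because the kubelet uses the monotonic m=+ offsets)
}
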
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:10:54 crc kubenswrapper[4482]: I1125 09:10:54.832491 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:10:54 crc kubenswrapper[4482]: E1125 09:10:54.833535 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:11:06 crc kubenswrapper[4482]: I1125 09:11:06.830925 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:11:06 crc kubenswrapper[4482]: E1125 09:11:06.831778 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:11:18 crc kubenswrapper[4482]: I1125 09:11:18.831010 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:11:18 crc kubenswrapper[4482]: E1125 09:11:18.832024 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:11:30 crc kubenswrapper[4482]: I1125 09:11:30.831360 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:11:30 crc kubenswrapper[4482]: E1125 09:11:30.832141 4482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4qzz_openshift-machine-config-operator(46a7d6ef-c931-4f15-893b-c9436d6de1f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" podUID="46a7d6ef-c931-4f15-893b-c9436d6de1f5" Nov 25 09:11:40 crc kubenswrapper[4482]: I1125 09:11:40.667223 4482 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-khl2t"] Nov 25 09:11:40 crc kubenswrapper[4482]: I1125 09:11:40.672571 4482 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:11:40 crc kubenswrapper[4482]: I1125 09:11:40.677611 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khl2t"] Nov 25 09:11:40 crc kubenswrapper[4482]: I1125 09:11:40.813033 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtktx\" (UniqueName: \"kubernetes.io/projected/5df1903f-195f-462c-a111-bcd549fb420e-kube-api-access-xtktx\") pod \"redhat-operators-khl2t\" (UID: \"5df1903f-195f-462c-a111-bcd549fb420e\") " pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:11:40 crc kubenswrapper[4482]: I1125 09:11:40.813186 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df1903f-195f-462c-a111-bcd549fb420e-catalog-content\") pod \"redhat-operators-khl2t\" (UID: \"5df1903f-195f-462c-a111-bcd549fb420e\") " pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:11:40 crc kubenswrapper[4482]: I1125 09:11:40.813243 4482 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df1903f-195f-462c-a111-bcd549fb420e-utilities\") pod \"redhat-operators-khl2t\" (UID: \"5df1903f-195f-462c-a111-bcd549fb420e\") " pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:11:40 crc kubenswrapper[4482]: I1125 09:11:40.915881 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtktx\" (UniqueName: \"kubernetes.io/projected/5df1903f-195f-462c-a111-bcd549fb420e-kube-api-access-xtktx\") pod \"redhat-operators-khl2t\" (UID: \"5df1903f-195f-462c-a111-bcd549fb420e\") " pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:11:40 crc kubenswrapper[4482]: I1125 09:11:40.915959 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df1903f-195f-462c-a111-bcd549fb420e-catalog-content\") pod \"redhat-operators-khl2t\" (UID: \"5df1903f-195f-462c-a111-bcd549fb420e\") " pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:11:40 crc kubenswrapper[4482]: I1125 09:11:40.915989 4482 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df1903f-195f-462c-a111-bcd549fb420e-utilities\") pod \"redhat-operators-khl2t\" (UID: \"5df1903f-195f-462c-a111-bcd549fb420e\") " pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:11:40 crc kubenswrapper[4482]: I1125 09:11:40.916574 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df1903f-195f-462c-a111-bcd549fb420e-catalog-content\") pod \"redhat-operators-khl2t\" (UID: \"5df1903f-195f-462c-a111-bcd549fb420e\") " pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:11:40 crc kubenswrapper[4482]: I1125 09:11:40.916933 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df1903f-195f-462c-a111-bcd549fb420e-utilities\") pod \"redhat-operators-khl2t\" (UID: \"5df1903f-195f-462c-a111-bcd549fb420e\") " pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:11:40 crc kubenswrapper[4482]: I1125 09:11:40.939005 4482 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xtktx\" (UniqueName: \"kubernetes.io/projected/5df1903f-195f-462c-a111-bcd549fb420e-kube-api-access-xtktx\") pod \"redhat-operators-khl2t\" (UID: \"5df1903f-195f-462c-a111-bcd549fb420e\") " pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:11:40 crc kubenswrapper[4482]: I1125 09:11:40.987855 4482 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:11:41 crc kubenswrapper[4482]: I1125 09:11:41.466521 4482 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khl2t"] Nov 25 09:11:41 crc kubenswrapper[4482]: I1125 09:11:41.832161 4482 scope.go:117] "RemoveContainer" containerID="32869d20e16d14b6c846acacb93d89343ff3e73d0a01f1ed0ba839763f23d2e5" Nov 25 09:11:42 crc kubenswrapper[4482]: I1125 09:11:42.372294 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4qzz" event={"ID":"46a7d6ef-c931-4f15-893b-c9436d6de1f5","Type":"ContainerStarted","Data":"7b521e6409c242dca877e9b2568e51ddd5a7ce0a45f5d4c4fbe25e5781187bd6"} Nov 25 09:11:42 crc kubenswrapper[4482]: I1125 09:11:42.374977 4482 generic.go:334] "Generic (PLEG): container finished" podID="5df1903f-195f-462c-a111-bcd549fb420e" containerID="2609bbdf0471309bd3cbb9e4c7efb8fcb652f3c8f9c5db20cc074a93fd7ddecc" exitCode=0 Nov 25 09:11:42 crc kubenswrapper[4482]: I1125 09:11:42.375072 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khl2t" event={"ID":"5df1903f-195f-462c-a111-bcd549fb420e","Type":"ContainerDied","Data":"2609bbdf0471309bd3cbb9e4c7efb8fcb652f3c8f9c5db20cc074a93fd7ddecc"} Nov 25 09:11:42 crc kubenswrapper[4482]: I1125 09:11:42.375135 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khl2t" event={"ID":"5df1903f-195f-462c-a111-bcd549fb420e","Type":"ContainerStarted","Data":"c9fc81deab2974f42c4e0001122fb470f3ede9801a4423e90727b2ba31cf8dd1"} Nov 25 09:11:43 crc kubenswrapper[4482]: I1125 09:11:43.394301 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khl2t" event={"ID":"5df1903f-195f-462c-a111-bcd549fb420e","Type":"ContainerStarted","Data":"7ab48e07e19d9d5b153e57662be4f0115e9d0bed497967231b0e539c25edd1cc"} Nov 25 09:11:46 crc kubenswrapper[4482]: I1125 09:11:46.428021 4482 generic.go:334] "Generic (PLEG): container finished" podID="5df1903f-195f-462c-a111-bcd549fb420e" containerID="7ab48e07e19d9d5b153e57662be4f0115e9d0bed497967231b0e539c25edd1cc" exitCode=0 Nov 25 09:11:46 crc kubenswrapper[4482]: I1125 09:11:46.428126 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khl2t" event={"ID":"5df1903f-195f-462c-a111-bcd549fb420e","Type":"ContainerDied","Data":"7ab48e07e19d9d5b153e57662be4f0115e9d0bed497967231b0e539c25edd1cc"} Nov 25 09:11:47 crc kubenswrapper[4482]: I1125 09:11:47.441005 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khl2t" event={"ID":"5df1903f-195f-462c-a111-bcd549fb420e","Type":"ContainerStarted","Data":"a69bd463bdb5062ae4c361c0c137851321dad1653ef519b30c959de218cf5325"} Nov 25 09:11:47 crc kubenswrapper[4482]: I1125 09:11:47.460692 4482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-khl2t" podStartSLOduration=2.917855584 podStartE2EDuration="7.460680727s" podCreationTimestamp="2025-11-25 09:11:40 +0000 UTC" 
firstStartedPulling="2025-11-25 09:11:42.377019693 +0000 UTC m=+8676.865250953" lastFinishedPulling="2025-11-25 09:11:46.919844837 +0000 UTC m=+8681.408076096" observedRunningTime="2025-11-25 09:11:47.456443652 +0000 UTC m=+8681.944674911" watchObservedRunningTime="2025-11-25 09:11:47.460680727 +0000 UTC m=+8681.948911986" Nov 25 09:11:50 crc kubenswrapper[4482]: I1125 09:11:50.988979 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:11:50 crc kubenswrapper[4482]: I1125 09:11:50.990565 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:11:52 crc kubenswrapper[4482]: I1125 09:11:52.034971 4482 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-khl2t" podUID="5df1903f-195f-462c-a111-bcd549fb420e" containerName="registry-server" probeResult="failure" output=< Nov 25 09:11:52 crc kubenswrapper[4482]: timeout: failed to connect service ":50051" within 1s Nov 25 09:11:52 crc kubenswrapper[4482]: > Nov 25 09:12:01 crc kubenswrapper[4482]: I1125 09:12:01.030342 4482 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:12:01 crc kubenswrapper[4482]: I1125 09:12:01.073276 4482 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:12:01 crc kubenswrapper[4482]: I1125 09:12:01.271073 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-khl2t"] Nov 25 09:12:02 crc kubenswrapper[4482]: I1125 09:12:02.593738 4482 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-khl2t" podUID="5df1903f-195f-462c-a111-bcd549fb420e" containerName="registry-server" containerID="cri-o://a69bd463bdb5062ae4c361c0c137851321dad1653ef519b30c959de218cf5325" gracePeriod=2 Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.194206 4482 util.go:48] "No ready sandbox for pod can be found. 
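
The startup-probe failure above, "timeout: failed to connect service \":50051\" within 1s", is the output of the gRPC health check that marketplace catalog pods run against their registry-server; it fails while the catalog content is still loading and passes at 09:12:01 once the server is up. Below is a rough standalone equivalent using the standard grpc_health_v1 service. The address and the 1s budget are taken from the probe output; this is a sketch, not the probe binary the pod actually runs.

// healthcheck.go - approximate the failing startup probe: dial the
// registry-server's gRPC endpoint on :50051 and call the standard
// health service with a 1s budget.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		// Mirrors the probe's "failed to connect service within 1s" failure.
		fmt.Fprintln(os.Stderr, "not serving:", err)
		os.Exit(1)
	}
	fmt.Println("SERVING")
}
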
Need to start a new one" pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.234263 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtktx\" (UniqueName: \"kubernetes.io/projected/5df1903f-195f-462c-a111-bcd549fb420e-kube-api-access-xtktx\") pod \"5df1903f-195f-462c-a111-bcd549fb420e\" (UID: \"5df1903f-195f-462c-a111-bcd549fb420e\") " Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.234464 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df1903f-195f-462c-a111-bcd549fb420e-utilities\") pod \"5df1903f-195f-462c-a111-bcd549fb420e\" (UID: \"5df1903f-195f-462c-a111-bcd549fb420e\") " Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.234543 4482 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df1903f-195f-462c-a111-bcd549fb420e-catalog-content\") pod \"5df1903f-195f-462c-a111-bcd549fb420e\" (UID: \"5df1903f-195f-462c-a111-bcd549fb420e\") " Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.244650 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df1903f-195f-462c-a111-bcd549fb420e-kube-api-access-xtktx" (OuterVolumeSpecName: "kube-api-access-xtktx") pod "5df1903f-195f-462c-a111-bcd549fb420e" (UID: "5df1903f-195f-462c-a111-bcd549fb420e"). InnerVolumeSpecName "kube-api-access-xtktx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.245348 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df1903f-195f-462c-a111-bcd549fb420e-utilities" (OuterVolumeSpecName: "utilities") pod "5df1903f-195f-462c-a111-bcd549fb420e" (UID: "5df1903f-195f-462c-a111-bcd549fb420e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.297383 4482 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df1903f-195f-462c-a111-bcd549fb420e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5df1903f-195f-462c-a111-bcd549fb420e" (UID: "5df1903f-195f-462c-a111-bcd549fb420e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.337793 4482 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df1903f-195f-462c-a111-bcd549fb420e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.337830 4482 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df1903f-195f-462c-a111-bcd549fb420e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.337843 4482 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtktx\" (UniqueName: \"kubernetes.io/projected/5df1903f-195f-462c-a111-bcd549fb420e-kube-api-access-xtktx\") on node \"crc\" DevicePath \"\"" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.602279 4482 generic.go:334] "Generic (PLEG): container finished" podID="5df1903f-195f-462c-a111-bcd549fb420e" containerID="a69bd463bdb5062ae4c361c0c137851321dad1653ef519b30c959de218cf5325" exitCode=0 Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.602327 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khl2t" event={"ID":"5df1903f-195f-462c-a111-bcd549fb420e","Type":"ContainerDied","Data":"a69bd463bdb5062ae4c361c0c137851321dad1653ef519b30c959de218cf5325"} Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.602361 4482 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khl2t" event={"ID":"5df1903f-195f-462c-a111-bcd549fb420e","Type":"ContainerDied","Data":"c9fc81deab2974f42c4e0001122fb470f3ede9801a4423e90727b2ba31cf8dd1"} Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.602382 4482 scope.go:117] "RemoveContainer" containerID="a69bd463bdb5062ae4c361c0c137851321dad1653ef519b30c959de218cf5325" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.603206 4482 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-khl2t" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.619725 4482 scope.go:117] "RemoveContainer" containerID="7ab48e07e19d9d5b153e57662be4f0115e9d0bed497967231b0e539c25edd1cc" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.631737 4482 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-khl2t"] Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.638030 4482 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-khl2t"] Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.664348 4482 scope.go:117] "RemoveContainer" containerID="2609bbdf0471309bd3cbb9e4c7efb8fcb652f3c8f9c5db20cc074a93fd7ddecc" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.683812 4482 scope.go:117] "RemoveContainer" containerID="a69bd463bdb5062ae4c361c0c137851321dad1653ef519b30c959de218cf5325" Nov 25 09:12:03 crc kubenswrapper[4482]: E1125 09:12:03.684344 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a69bd463bdb5062ae4c361c0c137851321dad1653ef519b30c959de218cf5325\": container with ID starting with a69bd463bdb5062ae4c361c0c137851321dad1653ef519b30c959de218cf5325 not found: ID does not exist" containerID="a69bd463bdb5062ae4c361c0c137851321dad1653ef519b30c959de218cf5325" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.684382 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a69bd463bdb5062ae4c361c0c137851321dad1653ef519b30c959de218cf5325"} err="failed to get container status \"a69bd463bdb5062ae4c361c0c137851321dad1653ef519b30c959de218cf5325\": rpc error: code = NotFound desc = could not find container \"a69bd463bdb5062ae4c361c0c137851321dad1653ef519b30c959de218cf5325\": container with ID starting with a69bd463bdb5062ae4c361c0c137851321dad1653ef519b30c959de218cf5325 not found: ID does not exist" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.684406 4482 scope.go:117] "RemoveContainer" containerID="7ab48e07e19d9d5b153e57662be4f0115e9d0bed497967231b0e539c25edd1cc" Nov 25 09:12:03 crc kubenswrapper[4482]: E1125 09:12:03.684666 4482 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ab48e07e19d9d5b153e57662be4f0115e9d0bed497967231b0e539c25edd1cc\": container with ID starting with 7ab48e07e19d9d5b153e57662be4f0115e9d0bed497967231b0e539c25edd1cc not found: ID does not exist" containerID="7ab48e07e19d9d5b153e57662be4f0115e9d0bed497967231b0e539c25edd1cc" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.684751 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ab48e07e19d9d5b153e57662be4f0115e9d0bed497967231b0e539c25edd1cc"} err="failed to get container status \"7ab48e07e19d9d5b153e57662be4f0115e9d0bed497967231b0e539c25edd1cc\": rpc error: code = NotFound desc = could not find container \"7ab48e07e19d9d5b153e57662be4f0115e9d0bed497967231b0e539c25edd1cc\": container with ID starting with 7ab48e07e19d9d5b153e57662be4f0115e9d0bed497967231b0e539c25edd1cc not found: ID does not exist" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.684821 4482 scope.go:117] "RemoveContainer" containerID="2609bbdf0471309bd3cbb9e4c7efb8fcb652f3c8f9c5db20cc074a93fd7ddecc" Nov 25 09:12:03 crc kubenswrapper[4482]: E1125 09:12:03.685082 4482 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2609bbdf0471309bd3cbb9e4c7efb8fcb652f3c8f9c5db20cc074a93fd7ddecc\": container with ID starting with 2609bbdf0471309bd3cbb9e4c7efb8fcb652f3c8f9c5db20cc074a93fd7ddecc not found: ID does not exist" containerID="2609bbdf0471309bd3cbb9e4c7efb8fcb652f3c8f9c5db20cc074a93fd7ddecc" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.685159 4482 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2609bbdf0471309bd3cbb9e4c7efb8fcb652f3c8f9c5db20cc074a93fd7ddecc"} err="failed to get container status \"2609bbdf0471309bd3cbb9e4c7efb8fcb652f3c8f9c5db20cc074a93fd7ddecc\": rpc error: code = NotFound desc = could not find container \"2609bbdf0471309bd3cbb9e4c7efb8fcb652f3c8f9c5db20cc074a93fd7ddecc\": container with ID starting with 2609bbdf0471309bd3cbb9e4c7efb8fcb652f3c8f9c5db20cc074a93fd7ddecc not found: ID does not exist" Nov 25 09:12:03 crc kubenswrapper[4482]: I1125 09:12:03.839923 4482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df1903f-195f-462c-a111-bcd549fb420e" path="/var/lib/kubelet/pods/5df1903f-195f-462c-a111-bcd549fb420e/volumes"